Jan 07 09:49:31 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 07 09:49:31 crc kubenswrapper[5131]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 07 09:49:31 crc kubenswrapper[5131]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Jan 07 09:49:31 crc kubenswrapper[5131]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 07 09:49:31 crc kubenswrapper[5131]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 07 09:49:31 crc kubenswrapper[5131]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 07 09:49:31 crc kubenswrapper[5131]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
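The deprecation warnings above all point at the kubelet config file referenced by `--config` (here `/etc/kubernetes/kubelet.conf`). A minimal sketch of the equivalent `KubeletConfiguration` fields is shown below; the field names are the upstream v1beta1 ones, but every value here is illustrative, not taken from this node:

```yaml
# Hypothetical KubeletConfiguration fragment; values are examples only.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# replaces --container-runtime-endpoint
containerRuntimeEndpoint: "unix:///var/run/crio/crio.sock"
# replaces --volume-plugin-dir (path is an assumption)
volumePluginDir: "/etc/kubernetes/kubelet-plugins/volume/exec"
# replaces --register-with-taints (taint is an example)
registerWithTaints:
  - key: "node-role.kubernetes.io/master"
    effect: "NoSchedule"
# replaces --system-reserved (amounts are examples)
systemReserved:
  cpu: "500m"
  memory: "1Gi"
# --minimum-container-ttl-duration is superseded by eviction settings
evictionHard:
  memory.available: "100Mi"
```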
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.940824 5131 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.943748 5131 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.943765 5131 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.943769 5131 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.943773 5131 feature_gate.go:328] unrecognized feature gate: DualReplica
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.943777 5131 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.943780 5131 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.943784 5131 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.943788 5131 feature_gate.go:328] unrecognized feature gate: Example2
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.943791 5131 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.943794 5131 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.943798 5131 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.943803 5131 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.943807 5131 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.943811 5131 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.943815 5131 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.943819 5131 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.943823 5131 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.943843 5131 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.943849 5131 feature_gate.go:328] unrecognized feature gate: OVNObservability
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.943853 5131 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.943857 5131 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.943869 5131 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.943873 5131 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.943876 5131 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.943880 5131 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.943883 5131 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.943886 5131 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.943890 5131 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.943893 5131 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.943898 5131 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.943903 5131 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.943907 5131 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.943910 5131 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.943914 5131 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.943917 5131 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.943921 5131 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.943924 5131 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.943928 5131 feature_gate.go:328] unrecognized feature gate: SignatureStores
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.943931 5131 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.943935 5131 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.943938 5131 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.943942 5131 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.943945 5131 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.943949 5131 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.943952 5131 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.943956 5131 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.943959 5131 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.943962 5131 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.943966 5131 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.943969 5131 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.943972 5131 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.943976 5131 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.943979 5131 feature_gate.go:328] unrecognized feature gate: PinnedImages
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.943982 5131 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.943987 5131 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.943990 5131 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.943993 5131 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.943998 5131 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944002 5131 feature_gate.go:328] unrecognized feature gate: NewOLM
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944006 5131 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944009 5131 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944012 5131 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944015 5131 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944019 5131 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944022 5131 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944025 5131 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944028 5131 feature_gate.go:328] unrecognized feature gate: Example
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944032 5131 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944035 5131 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944038 5131 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944042 5131 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944045 5131 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944048 5131 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944051 5131 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944055 5131 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944058 5131 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944062 5131 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944065 5131 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944068 5131 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944072 5131 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944075 5131 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944079 5131 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944082 5131 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944085 5131 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944088 5131 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944091 5131 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944448 5131 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944454 5131 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944457 5131 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944460 5131 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944464 5131 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944467 5131 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944470 5131 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944474 5131 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944477 5131 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944480 5131 feature_gate.go:328] unrecognized feature gate: Example
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944483 5131 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944487 5131 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944490 5131 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944493 5131 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944496 5131 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944499 5131 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944502 5131 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944506 5131 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944509 5131 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944512 5131 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944516 5131 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944519 5131 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944522 5131 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944525 5131 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944529 5131 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944532 5131 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944536 5131 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944540 5131 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944543 5131 feature_gate.go:328] unrecognized feature gate: SignatureStores
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944546 5131 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944550 5131 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944553 5131 feature_gate.go:328] unrecognized feature gate: DualReplica
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944556 5131 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944560 5131 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944565 5131 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944569 5131 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944573 5131 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944576 5131 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944580 5131 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944583 5131 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944586 5131 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944590 5131 feature_gate.go:328] unrecognized feature gate: Example2
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944593 5131 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944596 5131 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944600 5131 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944603 5131 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944606 5131 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944609 5131 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944612 5131 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944616 5131 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944621 5131 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944624 5131 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944627 5131 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944630 5131 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944634 5131 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944637 5131 feature_gate.go:328] unrecognized feature gate: OVNObservability
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944642 5131 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944645 5131 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944648 5131 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944651 5131 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944654 5131 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944658 5131 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944661 5131 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944664 5131 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944667 5131 feature_gate.go:328] unrecognized feature gate: NewOLM
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944670 5131 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944675 5131 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944679 5131 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944682 5131 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944686 5131 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944689 5131 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944693 5131 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944697 5131 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944701 5131 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944705 5131 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944709 5131 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944712 5131 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944715 5131 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944718 5131 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944721 5131 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944725 5131 feature_gate.go:328] unrecognized feature gate: PinnedImages
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944728 5131 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944732 5131 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944736 5131 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944739 5131 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.944742 5131 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945080 5131 flags.go:64] FLAG: --address="0.0.0.0"
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945091 5131 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945099 5131 flags.go:64] FLAG: --anonymous-auth="true"
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945104 5131 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945109 5131 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945113 5131 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945117 5131 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945123 5131 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945127 5131 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945131 5131 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945135 5131 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945139 5131 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945143 5131 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945148 5131 flags.go:64] FLAG: --cgroup-root=""
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945151 5131 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945155 5131 flags.go:64] FLAG: --client-ca-file=""
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945158 5131 flags.go:64] FLAG: --cloud-config=""
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945162 5131 flags.go:64] FLAG: --cloud-provider=""
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945166 5131 flags.go:64] FLAG: --cluster-dns="[]"
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945171 5131 flags.go:64] FLAG: --cluster-domain=""
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945175 5131 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945178 5131 flags.go:64] FLAG: --config-dir=""
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945182 5131 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945186 5131 flags.go:64] FLAG: --container-log-max-files="5"
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945190 5131 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945194 5131 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945197 5131 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945201 5131 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945205 5131 flags.go:64] FLAG: --contention-profiling="false"
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945208 5131 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945212 5131 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945216 5131 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945219 5131 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945225 5131 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945229 5131 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945233 5131 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945237 5131 flags.go:64] FLAG: --enable-load-reader="false"
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945243 5131 flags.go:64] FLAG: --enable-server="true"
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945246 5131 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945251 5131 flags.go:64] FLAG: --event-burst="100"
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945255 5131 flags.go:64] FLAG: --event-qps="50"
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945259 5131 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945262 5131 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945266 5131 flags.go:64] FLAG: --eviction-hard=""
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945271 5131 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945274 5131 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945279 5131 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945283 5131 flags.go:64] FLAG: --eviction-soft=""
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945287 5131 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945291 5131 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945294 5131 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945298 5131 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945301 5131 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945305 5131 flags.go:64] FLAG: --fail-swap-on="true"
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945308 5131 flags.go:64] FLAG: --feature-gates=""
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945313 5131 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945316 5131 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945320 5131 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945323 5131 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945327 5131 flags.go:64] FLAG: --healthz-port="10248"
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945331 5131 flags.go:64] FLAG: --help="false"
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945335 5131 flags.go:64] FLAG: --hostname-override=""
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945338 5131 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945341 5131 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945345 5131 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945349 5131 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945353 5131 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945356 5131 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945359 5131 flags.go:64] FLAG: --image-service-endpoint=""
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945366 5131 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945370 5131 flags.go:64] FLAG: --kube-api-burst="100"
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945374 5131 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945377 5131 flags.go:64] FLAG: --kube-api-qps="50"
Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945381 5131 flags.go:64] FLAG: --kube-reserved=""
Jan
07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945384 5131 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945388 5131 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945391 5131 flags.go:64] FLAG: --kubelet-cgroups="" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945395 5131 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945398 5131 flags.go:64] FLAG: --lock-file="" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945402 5131 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945406 5131 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945410 5131 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945416 5131 flags.go:64] FLAG: --log-json-split-stream="false" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945419 5131 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945423 5131 flags.go:64] FLAG: --log-text-split-stream="false" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945427 5131 flags.go:64] FLAG: --logging-format="text" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945430 5131 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945434 5131 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945438 5131 flags.go:64] FLAG: --manifest-url="" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945441 5131 flags.go:64] FLAG: --manifest-url-header="" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945446 5131 flags.go:64] FLAG: 
--max-housekeeping-interval="15s" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945470 5131 flags.go:64] FLAG: --max-open-files="1000000" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945475 5131 flags.go:64] FLAG: --max-pods="110" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945479 5131 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945483 5131 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945487 5131 flags.go:64] FLAG: --memory-manager-policy="None" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945490 5131 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945494 5131 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945598 5131 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945602 5131 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhel" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945613 5131 flags.go:64] FLAG: --node-status-max-images="50" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945618 5131 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945622 5131 flags.go:64] FLAG: --oom-score-adj="-999" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945626 5131 flags.go:64] FLAG: --pod-cidr="" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945630 5131 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2b30e70040205c2536d01ae5c850be1ed2d775cf13249e50328e5085777977" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945636 5131 flags.go:64] FLAG: --pod-manifest-path="" Jan 07 09:49:31 
crc kubenswrapper[5131]: I0107 09:49:31.945639 5131 flags.go:64] FLAG: --pod-max-pids="-1" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945643 5131 flags.go:64] FLAG: --pods-per-core="0" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945647 5131 flags.go:64] FLAG: --port="10250" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945651 5131 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945655 5131 flags.go:64] FLAG: --provider-id="" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945659 5131 flags.go:64] FLAG: --qos-reserved="" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945663 5131 flags.go:64] FLAG: --read-only-port="10255" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945667 5131 flags.go:64] FLAG: --register-node="true" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945670 5131 flags.go:64] FLAG: --register-schedulable="true" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945674 5131 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945680 5131 flags.go:64] FLAG: --registry-burst="10" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945684 5131 flags.go:64] FLAG: --registry-qps="5" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945687 5131 flags.go:64] FLAG: --reserved-cpus="" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945691 5131 flags.go:64] FLAG: --reserved-memory="" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945695 5131 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945699 5131 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945703 5131 flags.go:64] FLAG: --rotate-certificates="false" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945706 5131 flags.go:64] FLAG: 
--rotate-server-certificates="false" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945710 5131 flags.go:64] FLAG: --runonce="false" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945714 5131 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945718 5131 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945722 5131 flags.go:64] FLAG: --seccomp-default="false" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945725 5131 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945728 5131 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945732 5131 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945738 5131 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945742 5131 flags.go:64] FLAG: --storage-driver-password="root" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945747 5131 flags.go:64] FLAG: --storage-driver-secure="false" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945751 5131 flags.go:64] FLAG: --storage-driver-table="stats" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945755 5131 flags.go:64] FLAG: --storage-driver-user="root" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945759 5131 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945764 5131 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945768 5131 flags.go:64] FLAG: --system-cgroups="" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945772 5131 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 07 09:49:31 crc 
kubenswrapper[5131]: I0107 09:49:31.945778 5131 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945782 5131 flags.go:64] FLAG: --tls-cert-file="" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945786 5131 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945792 5131 flags.go:64] FLAG: --tls-min-version="" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945795 5131 flags.go:64] FLAG: --tls-private-key-file="" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945799 5131 flags.go:64] FLAG: --topology-manager-policy="none" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945803 5131 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945807 5131 flags.go:64] FLAG: --topology-manager-scope="container" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945811 5131 flags.go:64] FLAG: --v="2" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945817 5131 flags.go:64] FLAG: --version="false" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945822 5131 flags.go:64] FLAG: --vmodule="" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945852 5131 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.945859 5131 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.945969 5131 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.945973 5131 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.945977 5131 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.945981 5131 feature_gate.go:328] unrecognized feature gate: 
OVNObservability Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.945984 5131 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.945988 5131 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.945991 5131 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.945996 5131 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.945999 5131 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946002 5131 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946008 5131 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946011 5131 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946016 5131 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946019 5131 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946022 5131 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946026 5131 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946029 5131 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946032 5131 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 
09:49:31.946035 5131 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946038 5131 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946042 5131 feature_gate.go:328] unrecognized feature gate: PinnedImages Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946046 5131 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946050 5131 feature_gate.go:328] unrecognized feature gate: InsightsConfig Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946054 5131 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946057 5131 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946062 5131 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946065 5131 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946069 5131 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946072 5131 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946076 5131 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946080 5131 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946083 5131 feature_gate.go:328] unrecognized feature gate: DualReplica Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946086 5131 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946090 5131 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946093 5131 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946096 5131 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946099 5131 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946103 5131 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946106 5131 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946109 5131 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946112 5131 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946115 5131 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946120 5131 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946124 5131 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Jan 07 09:49:31 crc 
kubenswrapper[5131]: W0107 09:49:31.946128 5131 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946132 5131 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946135 5131 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946138 5131 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946141 5131 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946144 5131 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946148 5131 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946151 5131 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946154 5131 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946157 5131 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946160 5131 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946164 5131 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946167 5131 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946170 5131 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Jan 07 09:49:31 crc 
kubenswrapper[5131]: W0107 09:49:31.946174 5131 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946177 5131 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946180 5131 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946184 5131 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946187 5131 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946190 5131 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946193 5131 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946196 5131 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946200 5131 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946203 5131 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946206 5131 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946209 5131 feature_gate.go:328] unrecognized feature gate: SignatureStores Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946212 5131 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946216 5131 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 07 09:49:31 crc 
kubenswrapper[5131]: W0107 09:49:31.946219 5131 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946222 5131 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946227 5131 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946230 5131 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946235 5131 feature_gate.go:328] unrecognized feature gate: Example Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946239 5131 feature_gate.go:328] unrecognized feature gate: NewOLM Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946242 5131 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946245 5131 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946248 5131 feature_gate.go:328] unrecognized feature gate: Example2 Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946251 5131 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946254 5131 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946258 5131 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946261 5131 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.946264 5131 feature_gate.go:328] unrecognized feature gate: GatewayAPI Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.946427 5131 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false 
ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.960464 5131 server.go:530] "Kubelet version" kubeletVersion="v1.33.5" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.960505 5131 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.960638 5131 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.960651 5131 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.960659 5131 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.960668 5131 feature_gate.go:328] unrecognized feature gate: InsightsConfig Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.960675 5131 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.960683 5131 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.960690 5131 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.960698 5131 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.960706 5131 feature_gate.go:328] unrecognized feature gate: Example Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.960713 5131 feature_gate.go:328] 
unrecognized feature gate: NetworkLiveMigration Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.960720 5131 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.960728 5131 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.960735 5131 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.960742 5131 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.960768 5131 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.960775 5131 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.960782 5131 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.960790 5131 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.960797 5131 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.960804 5131 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.960810 5131 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.960817 5131 feature_gate.go:328] unrecognized feature gate: NewOLM Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.960825 5131 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.960860 5131 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 
09:49:31.960868 5131 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.960875 5131 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.960882 5131 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.960890 5131 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.960897 5131 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.960905 5131 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.960912 5131 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.960920 5131 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.960927 5131 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.960935 5131 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.960942 5131 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.960949 5131 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.960956 5131 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.960963 5131 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Jan 07 09:49:31 crc kubenswrapper[5131]: 
W0107 09:49:31.960970 5131 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.960978 5131 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.960985 5131 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.960992 5131 feature_gate.go:328] unrecognized feature gate: PinnedImages Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.960999 5131 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.961006 5131 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.961014 5131 feature_gate.go:328] unrecognized feature gate: OVNObservability Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.961022 5131 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.961029 5131 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.961053 5131 feature_gate.go:328] unrecognized feature gate: DualReplica Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.961061 5131 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.961068 5131 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.961076 5131 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.961083 5131 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.961090 5131 feature_gate.go:328] 
unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.961097 5131 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.961104 5131 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.961112 5131 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.961119 5131 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.961126 5131 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.961132 5131 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.961140 5131 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.961147 5131 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.961154 5131 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.961161 5131 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.961169 5131 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.961176 5131 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.961183 5131 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.961190 5131 feature_gate.go:328] unrecognized 
feature gate: SigstoreImageVerification Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.961197 5131 feature_gate.go:328] unrecognized feature gate: GatewayAPI Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.961204 5131 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.961211 5131 feature_gate.go:328] unrecognized feature gate: Example2 Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.961218 5131 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.961225 5131 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.961232 5131 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.961240 5131 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.961247 5131 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.961255 5131 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.961264 5131 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.961271 5131 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.961278 5131 feature_gate.go:328] unrecognized feature gate: SignatureStores Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.961288 5131 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.961920 5131 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.961937 5131 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.961948 5131 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.961958 5131 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.961967 5131 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.961977 5131 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.961991 5131 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.962989 5131 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963006 5131 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963018 5131 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963029 5131 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963038 5131 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963046 5131 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963055 5131 feature_gate.go:328] unrecognized feature gate: PinnedImages Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963063 5131 feature_gate.go:328] unrecognized feature gate: InsightsConfig Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963071 5131 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963079 5131 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963087 5131 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963103 5131 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963111 5131 feature_gate.go:328] unrecognized feature gate: Example Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963118 5131 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963126 5131 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963134 5131 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963141 5131 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Jan 07 
09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963148 5131 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963157 5131 feature_gate.go:328] unrecognized feature gate: NewOLM Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963164 5131 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963172 5131 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963180 5131 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963188 5131 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963200 5131 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963208 5131 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963215 5131 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963288 5131 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963297 5131 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963306 5131 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963315 5131 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963323 5131 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963331 5131 feature_gate.go:328] unrecognized 
feature gate: ConsolePluginContentSecurityPolicy Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963339 5131 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963346 5131 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963354 5131 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963362 5131 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963375 5131 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963383 5131 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963390 5131 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963398 5131 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963405 5131 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963475 5131 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963486 5131 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963493 5131 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963502 5131 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963510 5131 feature_gate.go:328] unrecognized feature gate: 
NewOLMWebhookProviderOpenshiftServiceCA Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963518 5131 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963556 5131 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963564 5131 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963571 5131 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963580 5131 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963587 5131 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963595 5131 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963602 5131 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963617 5131 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963625 5131 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963633 5131 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963640 5131 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963648 5131 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963718 5131 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Jan 07 
09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963755 5131 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963795 5131 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963869 5131 feature_gate.go:328] unrecognized feature gate: OVNObservability Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963914 5131 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963920 5131 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963928 5131 feature_gate.go:328] unrecognized feature gate: SignatureStores Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963934 5131 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963939 5131 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963943 5131 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963947 5131 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963951 5131 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963956 5131 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963960 5131 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963964 5131 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963972 5131 
feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.963976 5131 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.964290 5131 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.964298 5131 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.964302 5131 feature_gate.go:328] unrecognized feature gate: Example2 Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.964308 5131 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.964312 5131 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.964320 5131 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.964326 5131 feature_gate.go:328] unrecognized feature gate: GatewayAPI Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.964330 5131 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.964335 5131 feature_gate.go:328] unrecognized feature gate: DualReplica Jan 07 09:49:31 crc kubenswrapper[5131]: W0107 09:49:31.964339 5131 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.964347 5131 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.964939 5131 server.go:962] "Client rotation is on, will bootstrap in background" Jan 07 09:49:31 crc kubenswrapper[5131]: E0107 09:49:31.968693 5131 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2025-12-03 08:27:53 +0000 UTC" logger="UnhandledError" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.973009 5131 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.973147 5131 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.974106 5131 server.go:1019] "Starting client certificate rotation" Jan 
07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.974310 5131 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.974744 5131 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.985236 5131 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 07 09:49:31 crc kubenswrapper[5131]: E0107 09:49:31.987501 5131 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.220:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.988084 5131 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 07 09:49:31 crc kubenswrapper[5131]: I0107 09:49:31.999499 5131 log.go:25] "Validated CRI v1 runtime API" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.027680 5131 log.go:25] "Validated CRI v1 image API" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.029543 5131 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.032154 5131 fs.go:135] Filesystem UUIDs: map[19e76f87-96b8-4794-9744-0b33dca22d5b:/dev/vda3 2026-01-07-09-40-56-00:/dev/sr0 5eb7c122-420e-4494-80ec-41664070d7b6:/dev/vda4 7B77-95E7:/dev/vda2] Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.032193 5131 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot 
major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:45 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:46 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}] Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.058161 5131 manager.go:217] Machine: {Timestamp:2026-01-07 09:49:32.055862675 +0000 UTC m=+0.222164339 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33649922048 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:80bc4fba336e4ca1bc9d28a8be52a356 SystemUUID:8ea6fa36-73d5-4d37-aab0-72c44945d452 BootID:bd75f290-f432-4d83-b44b-78dd53c6e94f Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16824958976 Type:vfs Inodes:4107656 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6729986048 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:46 Capacity:1073741824 Type:vfs Inodes:4107656 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6545408 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16824963072 Type:vfs Inodes:1048576 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:45 Capacity:3364990976 Type:vfs Inodes:821531 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 
Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:f5:b4:6e Speed:0 Mtu:1500} {Name:br-int MacAddress:b2:a9:9f:57:07:84 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:f5:b4:6e Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:ce:5f:5b Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:ec:b6:0d Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:af:dd:f8 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:e9:c3:b9 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:6a:ff:49:0a:58:b2 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:fe:1c:d4:87:c5:8a Speed:0 Mtu:1500} {Name:tap0 MacAddress:5a:94:ef:e4:0c:ee Speed:10 Mtu:1500}] Topology:[{Id:0 Memory:33649922048 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 
BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.058528 5131 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.058781 5131 manager.go:233] Version: {KernelVersion:5.14.0-570.57.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20251021-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.060181 5131 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.060244 5131 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None
","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.060657 5131 topology_manager.go:138] "Creating topology manager with none policy" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.060685 5131 container_manager_linux.go:306] "Creating device plugin manager" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.060735 5131 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.061195 5131 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.061705 5131 state_mem.go:36] "Initialized new in-memory state store" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.062098 5131 server.go:1267] "Using root directory" path="/var/lib/kubelet" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.062955 5131 kubelet.go:491] "Attempting to sync node with API server" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.062992 5131 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.063022 5131 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.063051 5131 kubelet.go:397] "Adding apiserver pod source" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.063078 5131 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 07 09:49:32 crc kubenswrapper[5131]: E0107 09:49:32.065697 5131 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial 
tcp 38.102.83.220:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 07 09:49:32 crc kubenswrapper[5131]: E0107 09:49:32.065874 5131 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.220:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.066638 5131 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint"
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.066696 5131 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.069017 5131 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint"
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.069049 5131 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.070871 5131 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.5-3.rhaos4.20.gitd0ea985.el9" apiVersion="v1"
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.071190 5131 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-server-current.pem"
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.072304 5131 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.073003 5131 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.073048 5131 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.073069 5131 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.073088 5131 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.073107 5131 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.073126 5131 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.073145 5131 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.073163 5131 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.073188 5131 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.073219 5131 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.073244 5131 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.073464 5131 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.074531 5131 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.074565 5131 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image"
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.075622 5131 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.220:6443: connect: connection refused
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.089794 5131 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.089915 5131 server.go:1295] "Started kubelet"
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.090218 5131 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.090809 5131 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.091003 5131 server_v1.go:47] "podresources" method="list" useActivePods=true
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.091722 5131 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 07 09:49:32 crc systemd[1]: Started Kubernetes Kubelet.
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.094774 5131 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.095516 5131 server.go:317] "Adding debug handlers to kubelet server"
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.095703 5131 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving"
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.095738 5131 volume_manager.go:295] "The desired_state_of_world populator starts"
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.095760 5131 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 07 09:49:32 crc kubenswrapper[5131]: E0107 09:49:32.096353 5131 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.096415 5131 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 07 09:49:32 crc kubenswrapper[5131]: E0107 09:49:32.092387 5131 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.220:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188869f90aad597f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:32.089825663 +0000 UTC m=+0.256127257,LastTimestamp:2026-01-07 09:49:32.089825663 +0000 UTC m=+0.256127257,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.098256 5131 factory.go:55] Registering systemd factory
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.098305 5131 factory.go:223] Registration of the systemd container factory successfully
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.098825 5131 factory.go:153] Registering CRI-O factory
Jan 07 09:49:32 crc kubenswrapper[5131]: E0107 09:49:32.098686 5131 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.220:6443: connect: connection refused" interval="200ms"
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.098856 5131 factory.go:223] Registration of the crio container factory successfully
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.098949 5131 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.098968 5131 factory.go:103] Registering Raw factory
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.098981 5131 manager.go:1196] Started watching for new ooms in manager
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.099454 5131 manager.go:319] Starting recovery of all containers
Jan 07 09:49:32 crc kubenswrapper[5131]: E0107 09:49:32.100261 5131 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.220:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.118924 5131 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.137407 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.137463 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.137481 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.137495 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.137509 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.137523 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.141565 5131 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount"
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.141640 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.141646 5131 manager.go:324] Recovery completed
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.141666 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.141695 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.141715 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.141732 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b638b8f4bb0070e40528db779baf6a2" volumeName="kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.141749 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.141777 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.141804 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.141827 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.141874 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.141894 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.141913 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.141932 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.141951 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.141975 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.142001 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.142023 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.142042 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.142062 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.142123 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.142145 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.142165 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.142221 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.142242 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.142261 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.142305 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.142326 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.142347 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.142367 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.142386 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.142404 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.142422 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.142440 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.142458 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.142481 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.142505 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.142550 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.142569 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.142588 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.142607 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.142626 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.142647 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.142667 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.142687 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.142705 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.142723 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.142739 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.142757 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.142781 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.142799 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.142826 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.142878 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.142897 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.142917 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.142936 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.142955 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.142976 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.143003 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.143055 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.143081 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.143100 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.143117 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.143148 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.143168 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.143185 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.143205 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.143223 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.143248 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.143266 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.143284 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.143304 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.143322 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.143339 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.143359 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.143378 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.143397 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.143414 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.143430 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.143447 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.143466 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.143485 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.143503 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.143518 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.143539 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.143557 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17b87002-b798-480a-8e17-83053d698239" volumeName="kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.143586 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.143607 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.143627 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.143644 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.143671 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.143696 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.143717 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" seLinuxMountContext=""
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.143734
5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.144490 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.144516 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.144550 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.144580 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.144610 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.144630 5131 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.144651 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.144689 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.144708 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.144883 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.144907 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.144935 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" 
volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.144957 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.145022 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.145048 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.145066 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.145217 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.145242 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" 
volumeName="kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.145261 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.145288 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.145306 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.145328 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.145347 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.145369 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" 
volumeName="kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.145393 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.145413 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.145438 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.145458 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.145494 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.148542 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" 
volumeName="kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.148624 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.148667 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.148692 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.148811 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.148868 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.148890 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" seLinuxMountContext="" Jan 
07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.148920 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.148999 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.149056 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.149076 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.149095 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.149147 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.149167 5131 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.149193 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.149213 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.149300 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.149322 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.149342 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.149368 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.149415 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.149443 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.149637 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.149664 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.149686 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.149748 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="e093be35-bb62-4843-b2e8-094545761610" volumeName="kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.149775 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.149794 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.149894 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.149913 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.149932 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.149963 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" 
volumeName="kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.150044 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.150073 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.150162 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.150218 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.150268 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.150287 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" 
volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.150314 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.150334 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.150361 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.150383 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.150411 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.150430 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" 
volumeName="kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.150477 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.150503 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.150523 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.150549 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20c5c5b4bed930554494851fe3cb2b2a" volumeName="kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.150570 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.150684 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" 
volumeName="kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.150705 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.150739 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.150783 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.150803 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.150827 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.150870 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" 
seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.150889 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.150938 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.150956 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.150978 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.151063 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0effdbcf-dd7d-404d-9d48-77536d665a5d" volumeName="kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.151117 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 
09:49:32.151176 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.151196 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.151221 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.151240 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.151263 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.151308 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.151375 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.151393 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.151411 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.151436 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.151505 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.151699 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.152181 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" 
volumeName="kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.152213 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.152252 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.152290 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.152349 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.152379 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.152414 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af41de71-79cf-4590-bbe9-9e8b848862cb" 
volumeName="kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.152466 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.152495 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.152553 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.152583 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.152620 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.152648 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" 
seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.152683 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.152735 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.152789 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.152898 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.152922 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.152947 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: 
I0107 09:49:32.152967 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.152987 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.153048 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.153191 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.153217 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.153232 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.153259 5131 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.153278 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.153291 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.153307 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.153320 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.153338 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.153351 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" 
volumeName="kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.153363 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.153379 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.153393 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.153411 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.153464 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.153477 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" seLinuxMountContext="" 
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.153496 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.153508 5131 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f863fff9-286a-45fa-b8f0-8a86994b8440" volumeName="kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75" seLinuxMountContext="" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.153522 5131 reconstruct.go:97] "Volume reconstruction finished" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.153532 5131 reconciler.go:26] "Reconciler: start to sync state" Jan 07 09:49:32 crc kubenswrapper[5131]: W0107 09:49:32.163812 5131 watcher.go:93] Error while processing event ("/sys/fs/cgroup/system.slice/ocp-mco-sshkey.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/system.slice/ocp-mco-sshkey.service: no such file or directory Jan 07 09:49:32 crc kubenswrapper[5131]: W0107 09:49:32.165240 5131 watcher.go:93] Error while processing event ("/sys/fs/cgroup/system.slice/ocp-userpasswords.service": 0x40000100 == IN_CREATE|IN_ISDIR): readdirent /sys/fs/cgroup/system.slice/ocp-userpasswords.service: no such file or directory Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.169406 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.170823 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.170939 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 
07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.170975 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.171799 5131 cpu_manager.go:222] "Starting CPU manager" policy="none" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.171877 5131 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.172001 5131 state_mem.go:36] "Initialized new in-memory state store" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.175977 5131 policy_none.go:49] "None policy: Start" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.176015 5131 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.176033 5131 state_mem.go:35] "Initializing new in-memory state store" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.178752 5131 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.178816 5131 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.178880 5131 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.178899 5131 kubelet.go:2451] "Starting kubelet main sync loop" Jan 07 09:49:32 crc kubenswrapper[5131]: E0107 09:49:32.178964 5131 kubelet.go:2475] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 07 09:49:32 crc kubenswrapper[5131]: E0107 09:49:32.180300 5131 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.220:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 07 09:49:32 crc kubenswrapper[5131]: E0107 09:49:32.196735 5131 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.225198 5131 manager.go:341] "Starting Device Plugin manager" Jan 07 09:49:32 crc kubenswrapper[5131]: E0107 09:49:32.225428 5131 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.225450 5131 server.go:85] "Starting device plugin registration server" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.226002 5131 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.226023 5131 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.226153 5131 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.226355 5131 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 07 09:49:32 
crc kubenswrapper[5131]: I0107 09:49:32.226386 5131 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 07 09:49:32 crc kubenswrapper[5131]: E0107 09:49:32.231090 5131 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="non-existent label \"crio-containers\"" Jan 07 09:49:32 crc kubenswrapper[5131]: E0107 09:49:32.231152 5131 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.279435 5131 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.279571 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.280436 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.280463 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.280472 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.280992 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.281259 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.281313 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.281380 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.281422 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.281437 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.281861 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.281885 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.281893 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.282374 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.282446 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.282490 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.282926 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.282951 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.282964 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.283010 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.283041 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.283057 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.283872 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.284142 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.284223 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.284298 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.284347 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.284389 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.284942 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.284980 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.284994 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.285347 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.285425 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.285467 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.285893 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.285926 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.285926 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.285968 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.285988 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.285940 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.287162 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.287213 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.287714 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.287746 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.287759 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:49:32 crc kubenswrapper[5131]: E0107 09:49:32.299801 5131 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.220:6443: connect: connection refused" interval="400ms" Jan 07 09:49:32 crc kubenswrapper[5131]: E0107 09:49:32.314025 5131 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.327809 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.328636 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.328674 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.328690 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:49:32 
crc kubenswrapper[5131]: I0107 09:49:32.328715 5131 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 07 09:49:32 crc kubenswrapper[5131]: E0107 09:49:32.329240 5131 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.220:6443: connect: connection refused" node="crc" Jan 07 09:49:32 crc kubenswrapper[5131]: E0107 09:49:32.333565 5131 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 07 09:49:32 crc kubenswrapper[5131]: E0107 09:49:32.341058 5131 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.360003 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.360249 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.360284 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.360310 5131 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: E0107 09:49:32.360395 5131 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.360432 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.360513 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.360622 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.360646 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.360665 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.360700 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.360725 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.360780 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.360809 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") 
" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.360850 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.360875 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.360897 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.360905 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.360922 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " 
pod="openshift-etcd/etcd-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.360946 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.360967 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.360987 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.361006 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.361027 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.361045 5131 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.361047 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.361086 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.361113 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.361138 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.361358 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod 
\"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.361561 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: E0107 09:49:32.367505 5131 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.462091 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.462137 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.462163 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.462194 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.462280 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.462342 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.462361 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.462424 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.462469 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.462445 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.462569 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.462615 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.462653 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.462682 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: 
I0107 09:49:32.462714 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.462742 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.462771 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.462800 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.462869 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.462902 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: 
\"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.462938 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.463425 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.463488 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.463557 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.463606 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.463669 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.463696 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.463720 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.463760 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.463814 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.463816 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 
09:49:32.463888 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.530300 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.532075 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.532145 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.532171 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.532213 5131 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 07 09:49:32 crc kubenswrapper[5131]: E0107 09:49:32.532923 5131 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.220:6443: connect: connection refused" node="crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.614603 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.639215 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.642061 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: W0107 09:49:32.648530 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e08c320b1e9e2405e6e0107bdf7eeb4.slice/crio-a7f968c7248c04fbdaccdd5986fce0d6426b7a9912acd67d254b56d30ac7411c WatchSource:0}: Error finding container a7f968c7248c04fbdaccdd5986fce0d6426b7a9912acd67d254b56d30ac7411c: Status 404 returned error can't find the container with id a7f968c7248c04fbdaccdd5986fce0d6426b7a9912acd67d254b56d30ac7411c Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.654523 5131 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.661135 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.667962 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 07 09:49:32 crc kubenswrapper[5131]: W0107 09:49:32.678323 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c5c5b4bed930554494851fe3cb2b2a.slice/crio-1b1e5b71e507398c99448834848504cf578d6838c60d756fdc3fc321af83db7e WatchSource:0}: Error finding container 1b1e5b71e507398c99448834848504cf578d6838c60d756fdc3fc321af83db7e: Status 404 returned error can't find the container with id 1b1e5b71e507398c99448834848504cf578d6838c60d756fdc3fc321af83db7e Jan 07 09:49:32 crc kubenswrapper[5131]: W0107 09:49:32.689111 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a14caf222afb62aaabdc47808b6f944.slice/crio-73f17549161096281aa1f08ec088e2b14a448b1e4a72c94da203cf51d3d6bc8d WatchSource:0}: Error finding container 73f17549161096281aa1f08ec088e2b14a448b1e4a72c94da203cf51d3d6bc8d: Status 404 returned error can't find the container with id 73f17549161096281aa1f08ec088e2b14a448b1e4a72c94da203cf51d3d6bc8d Jan 07 09:49:32 crc kubenswrapper[5131]: W0107 09:49:32.691431 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f0bc7fcb0822a2c13eb2d22cd8c0641.slice/crio-b5a4a4f41735345c27d5d20aeebd762d5863f6f17155af1a8bd532bff3b4f8c4 WatchSource:0}: Error finding container b5a4a4f41735345c27d5d20aeebd762d5863f6f17155af1a8bd532bff3b4f8c4: Status 404 returned error can't find the container with id b5a4a4f41735345c27d5d20aeebd762d5863f6f17155af1a8bd532bff3b4f8c4 Jan 07 09:49:32 crc kubenswrapper[5131]: W0107 09:49:32.699092 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b638b8f4bb0070e40528db779baf6a2.slice/crio-ae53089fab60fc301c1d9300b6851b0dcfe12a00653bc8cda59246e97ee10142 
WatchSource:0}: Error finding container ae53089fab60fc301c1d9300b6851b0dcfe12a00653bc8cda59246e97ee10142: Status 404 returned error can't find the container with id ae53089fab60fc301c1d9300b6851b0dcfe12a00653bc8cda59246e97ee10142 Jan 07 09:49:32 crc kubenswrapper[5131]: E0107 09:49:32.701166 5131 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.220:6443: connect: connection refused" interval="800ms" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.934056 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.935173 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.935211 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.935221 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:49:32 crc kubenswrapper[5131]: I0107 09:49:32.935241 5131 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 07 09:49:32 crc kubenswrapper[5131]: E0107 09:49:32.935698 5131 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.220:6443: connect: connection refused" node="crc" Jan 07 09:49:33 crc kubenswrapper[5131]: I0107 09:49:33.077040 5131 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.220:6443: connect: connection refused Jan 07 09:49:33 crc kubenswrapper[5131]: I0107 
09:49:33.187514 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"1043d76beefe7dc0844f533476401d9ca57619ede4a2fa4b59df7c24ef674024"}
Jan 07 09:49:33 crc kubenswrapper[5131]: I0107 09:49:33.187609 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"ae53089fab60fc301c1d9300b6851b0dcfe12a00653bc8cda59246e97ee10142"}
Jan 07 09:49:33 crc kubenswrapper[5131]: I0107 09:49:33.187788 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 07 09:49:33 crc kubenswrapper[5131]: I0107 09:49:33.188784 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:49:33 crc kubenswrapper[5131]: I0107 09:49:33.188818 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:49:33 crc kubenswrapper[5131]: I0107 09:49:33.188848 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:49:33 crc kubenswrapper[5131]: E0107 09:49:33.189027 5131 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 07 09:49:33 crc kubenswrapper[5131]: I0107 09:49:33.190531 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"b5a4a4f41735345c27d5d20aeebd762d5863f6f17155af1a8bd532bff3b4f8c4"}
Jan 07 09:49:33 crc kubenswrapper[5131]: E0107 09:49:33.190673 5131 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.220:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 07 09:49:33 crc kubenswrapper[5131]: I0107 09:49:33.195087 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"dd6371190af55f4fae494e73d316e7347f26bd60e0b02bc18c31ce1cf7f1bb9b"}
Jan 07 09:49:33 crc kubenswrapper[5131]: I0107 09:49:33.195134 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"73f17549161096281aa1f08ec088e2b14a448b1e4a72c94da203cf51d3d6bc8d"}
Jan 07 09:49:33 crc kubenswrapper[5131]: I0107 09:49:33.195293 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 07 09:49:33 crc kubenswrapper[5131]: I0107 09:49:33.196013 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:49:33 crc kubenswrapper[5131]: I0107 09:49:33.196046 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:49:33 crc kubenswrapper[5131]: I0107 09:49:33.196059 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:49:33 crc kubenswrapper[5131]: E0107 09:49:33.196237 5131 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 07 09:49:33 crc kubenswrapper[5131]: I0107 09:49:33.196688 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"1b1e5b71e507398c99448834848504cf578d6838c60d756fdc3fc321af83db7e"}
Jan 07 09:49:33 crc kubenswrapper[5131]: I0107 09:49:33.198899 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"a7f968c7248c04fbdaccdd5986fce0d6426b7a9912acd67d254b56d30ac7411c"}
Jan 07 09:49:33 crc kubenswrapper[5131]: I0107 09:49:33.198999 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 07 09:49:33 crc kubenswrapper[5131]: I0107 09:49:33.199482 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:49:33 crc kubenswrapper[5131]: I0107 09:49:33.199507 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:49:33 crc kubenswrapper[5131]: I0107 09:49:33.199519 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:49:33 crc kubenswrapper[5131]: E0107 09:49:33.199678 5131 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 07 09:49:33 crc kubenswrapper[5131]: E0107 09:49:33.403439 5131 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.220:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 07 09:49:33 crc kubenswrapper[5131]: E0107 09:49:33.502541 5131 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.220:6443: connect: connection refused" interval="1.6s"
Jan 07 09:49:33 crc kubenswrapper[5131]: E0107 09:49:33.539197 5131 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.220:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 07 09:49:33 crc kubenswrapper[5131]: E0107 09:49:33.691601 5131 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.220:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 07 09:49:33 crc kubenswrapper[5131]: I0107 09:49:33.736532 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 07 09:49:33 crc kubenswrapper[5131]: I0107 09:49:33.737557 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:49:33 crc kubenswrapper[5131]: I0107 09:49:33.737598 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:49:33 crc kubenswrapper[5131]: I0107 09:49:33.737610 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:49:33 crc kubenswrapper[5131]: I0107 09:49:33.737635 5131 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 07 09:49:33 crc kubenswrapper[5131]: E0107 09:49:33.738168 5131 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.220:6443: connect: connection refused" node="crc"
Jan 07 09:49:34 crc kubenswrapper[5131]: I0107 09:49:34.076974 5131 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.220:6443: connect: connection refused
Jan 07 09:49:34 crc kubenswrapper[5131]: I0107 09:49:34.104358 5131 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Jan 07 09:49:34 crc kubenswrapper[5131]: E0107 09:49:34.105986 5131 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.220:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jan 07 09:49:34 crc kubenswrapper[5131]: I0107 09:49:34.202029 5131 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="dd6371190af55f4fae494e73d316e7347f26bd60e0b02bc18c31ce1cf7f1bb9b" exitCode=0
Jan 07 09:49:34 crc kubenswrapper[5131]: I0107 09:49:34.202099 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"dd6371190af55f4fae494e73d316e7347f26bd60e0b02bc18c31ce1cf7f1bb9b"}
Jan 07 09:49:34 crc kubenswrapper[5131]: I0107 09:49:34.202244 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 07 09:49:34 crc kubenswrapper[5131]: I0107 09:49:34.202711 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:49:34 crc kubenswrapper[5131]: I0107 09:49:34.202738 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:49:34 crc kubenswrapper[5131]: I0107 09:49:34.202750 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:49:34 crc kubenswrapper[5131]: E0107 09:49:34.202958 5131 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 07 09:49:34 crc kubenswrapper[5131]: I0107 09:49:34.204007 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 07 09:49:34 crc kubenswrapper[5131]: I0107 09:49:34.204818 5131 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="86a293f775ea339fa870889624391ae039158ac4544d88b6f9c9d7c136e716a7" exitCode=0
Jan 07 09:49:34 crc kubenswrapper[5131]: I0107 09:49:34.204900 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"86a293f775ea339fa870889624391ae039158ac4544d88b6f9c9d7c136e716a7"}
Jan 07 09:49:34 crc kubenswrapper[5131]: I0107 09:49:34.205015 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 07 09:49:34 crc kubenswrapper[5131]: I0107 09:49:34.205404 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:49:34 crc kubenswrapper[5131]: I0107 09:49:34.205435 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:49:34 crc kubenswrapper[5131]: I0107 09:49:34.205460 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:49:34 crc kubenswrapper[5131]: I0107 09:49:34.205953 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:49:34 crc kubenswrapper[5131]: I0107 09:49:34.205979 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:49:34 crc kubenswrapper[5131]: I0107 09:49:34.205990 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:49:34 crc kubenswrapper[5131]: E0107 09:49:34.206127 5131 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 07 09:49:34 crc kubenswrapper[5131]: I0107 09:49:34.208558 5131 generic.go:358] "Generic (PLEG): container finished" podID="4e08c320b1e9e2405e6e0107bdf7eeb4" containerID="b059fcd2d184beda447aba3f6a320cb6d3f0c1bc3061fc47b9020d4c03f4a020" exitCode=0
Jan 07 09:49:34 crc kubenswrapper[5131]: I0107 09:49:34.208642 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerDied","Data":"b059fcd2d184beda447aba3f6a320cb6d3f0c1bc3061fc47b9020d4c03f4a020"}
Jan 07 09:49:34 crc kubenswrapper[5131]: I0107 09:49:34.208669 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"45412eb529aa08671402f5e439a2d0258d5e438466b13a1a3a8264e3eb9c8407"}
Jan 07 09:49:34 crc kubenswrapper[5131]: I0107 09:49:34.208793 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 07 09:49:34 crc kubenswrapper[5131]: I0107 09:49:34.209384 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:49:34 crc kubenswrapper[5131]: I0107 09:49:34.209426 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:49:34 crc kubenswrapper[5131]: I0107 09:49:34.209439 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:49:34 crc kubenswrapper[5131]: E0107 09:49:34.209666 5131 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 07 09:49:34 crc kubenswrapper[5131]: I0107 09:49:34.212012 5131 generic.go:358] "Generic (PLEG): container finished" podID="0b638b8f4bb0070e40528db779baf6a2" containerID="1043d76beefe7dc0844f533476401d9ca57619ede4a2fa4b59df7c24ef674024" exitCode=0
Jan 07 09:49:34 crc kubenswrapper[5131]: I0107 09:49:34.212086 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerDied","Data":"1043d76beefe7dc0844f533476401d9ca57619ede4a2fa4b59df7c24ef674024"}
Jan 07 09:49:34 crc kubenswrapper[5131]: I0107 09:49:34.212181 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 07 09:49:34 crc kubenswrapper[5131]: I0107 09:49:34.212878 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:49:34 crc kubenswrapper[5131]: I0107 09:49:34.212908 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:49:34 crc kubenswrapper[5131]: I0107 09:49:34.212919 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:49:34 crc kubenswrapper[5131]: E0107 09:49:34.213106 5131 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 07 09:49:34 crc kubenswrapper[5131]: I0107 09:49:34.215767 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"81a19faef229379a9f11c9404f00a4cd033fe495e075b60878147f896005767f"}
Jan 07 09:49:34 crc kubenswrapper[5131]: I0107 09:49:34.215817 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"8b7f67a6eae4396f64fdd42279b61c6411a1dd1ad3f4d92b483b4cf59ff1284c"}
Jan 07 09:49:34 crc kubenswrapper[5131]: I0107 09:49:34.215869 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"a048a29a003bbae3bffb916e657c9b18246309ec82bcd1cf410f76e266ba25cb"}
Jan 07 09:49:34 crc kubenswrapper[5131]: I0107 09:49:34.215887 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"3383385e15ea13116da82fca0263faac293829a1d334c3ab9c3e887d3df064f7"}
Jan 07 09:49:34 crc kubenswrapper[5131]: I0107 09:49:34.216056 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 07 09:49:34 crc kubenswrapper[5131]: I0107 09:49:34.216599 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:49:34 crc kubenswrapper[5131]: I0107 09:49:34.216638 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:49:34 crc kubenswrapper[5131]: I0107 09:49:34.216655 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:49:34 crc kubenswrapper[5131]: E0107 09:49:34.217280 5131 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 07 09:49:34 crc kubenswrapper[5131]: I0107 09:49:34.588036 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 07 09:49:35 crc kubenswrapper[5131]: I0107 09:49:35.221975 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"8ffbff8be21e181dfc3205fb877325fee8beefff7ba32e422a2619e1ab9772a9"}
Jan 07 09:49:35 crc kubenswrapper[5131]: I0107 09:49:35.222280 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"1e073b8c65df9f45d38018d244c88e515556561a3c3feb4a2cf3d270c77064b0"}
Jan 07 09:49:35 crc kubenswrapper[5131]: I0107 09:49:35.222302 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"c8a53522c69210792aee2dce5b7e8e34b2cf22c24393e063a59b465373ab5096"}
Jan 07 09:49:35 crc kubenswrapper[5131]: I0107 09:49:35.222483 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 07 09:49:35 crc kubenswrapper[5131]: I0107 09:49:35.223422 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:49:35 crc kubenswrapper[5131]: I0107 09:49:35.223474 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:49:35 crc kubenswrapper[5131]: I0107 09:49:35.223498 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:49:35 crc kubenswrapper[5131]: E0107 09:49:35.223878 5131 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 07 09:49:35 crc kubenswrapper[5131]: I0107 09:49:35.227792 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"f7200e5d1d13d232ad67de2ea89381542d858c266de6b68e33bfe97a520bfd66"}
Jan 07 09:49:35 crc kubenswrapper[5131]: I0107 09:49:35.227866 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"222177f33dbcd646941928b01ab9b05233038233497ca1767fba6f7706b3dc93"}
Jan 07 09:49:35 crc kubenswrapper[5131]: I0107 09:49:35.227886 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"f94695038c6c0633c279f363909c7c60ac6e6487469757ddfa9a64766e9ad38a"}
Jan 07 09:49:35 crc kubenswrapper[5131]: I0107 09:49:35.227904 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"9ab1390be253b0acce2b38b656b6ab5fb3b2b0b0df6b0bf4aa1c9a6706d5b377"}
Jan 07 09:49:35 crc kubenswrapper[5131]: I0107 09:49:35.231005 5131 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="102d63810639c2cee7fa3e0fef9769b09374348e27bc61573718700039515aa7" exitCode=0
Jan 07 09:49:35 crc kubenswrapper[5131]: I0107 09:49:35.231290 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 07 09:49:35 crc kubenswrapper[5131]: I0107 09:49:35.231875 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 07 09:49:35 crc kubenswrapper[5131]: I0107 09:49:35.232198 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"102d63810639c2cee7fa3e0fef9769b09374348e27bc61573718700039515aa7"}
Jan 07 09:49:35 crc kubenswrapper[5131]: I0107 09:49:35.233214 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:49:35 crc kubenswrapper[5131]: I0107 09:49:35.233253 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:49:35 crc kubenswrapper[5131]: I0107 09:49:35.233273 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:49:35 crc kubenswrapper[5131]: E0107 09:49:35.233759 5131 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 07 09:49:35 crc kubenswrapper[5131]: I0107 09:49:35.234542 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:49:35 crc kubenswrapper[5131]: I0107 09:49:35.234592 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:49:35 crc kubenswrapper[5131]: I0107 09:49:35.234620 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:49:35 crc kubenswrapper[5131]: E0107 09:49:35.234952 5131 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 07 09:49:35 crc kubenswrapper[5131]: I0107 09:49:35.338258 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 07 09:49:35 crc kubenswrapper[5131]: I0107 09:49:35.339170 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:49:35 crc kubenswrapper[5131]: I0107 09:49:35.339207 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:49:35 crc kubenswrapper[5131]: I0107 09:49:35.339217 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:49:35 crc kubenswrapper[5131]: I0107 09:49:35.339240 5131 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 07 09:49:36 crc kubenswrapper[5131]: I0107 09:49:36.240122 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"1cf234b3b691b05282272dd6550724fea27e87f7add44f735186741bad0ff89c"}
Jan 07 09:49:36 crc kubenswrapper[5131]: I0107 09:49:36.240344 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 07 09:49:36 crc kubenswrapper[5131]: I0107 09:49:36.241996 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:49:36 crc kubenswrapper[5131]: I0107 09:49:36.242049 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:49:36 crc kubenswrapper[5131]: I0107 09:49:36.242067 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:49:36 crc kubenswrapper[5131]: E0107 09:49:36.242522 5131 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 07 09:49:36 crc kubenswrapper[5131]: I0107 09:49:36.243807 5131 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="160fd415fae252c3639e426e9905fd01e6e8f42b4cbb66f8169427c602cc373f" exitCode=0
Jan 07 09:49:36 crc kubenswrapper[5131]: I0107 09:49:36.244063 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 07 09:49:36 crc kubenswrapper[5131]: I0107 09:49:36.244084 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"160fd415fae252c3639e426e9905fd01e6e8f42b4cbb66f8169427c602cc373f"}
Jan 07 09:49:36 crc kubenswrapper[5131]: I0107 09:49:36.244066 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 07 09:49:36 crc kubenswrapper[5131]: I0107 09:49:36.244727 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:49:36 crc kubenswrapper[5131]: I0107 09:49:36.244778 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:49:36 crc kubenswrapper[5131]: I0107 09:49:36.244803 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:49:36 crc kubenswrapper[5131]: I0107 09:49:36.245060 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:49:36 crc kubenswrapper[5131]: I0107 09:49:36.245169 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:49:36 crc kubenswrapper[5131]: I0107 09:49:36.245253 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:49:36 crc kubenswrapper[5131]: E0107 09:49:36.245365 5131 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 07 09:49:36 crc kubenswrapper[5131]: E0107 09:49:36.245680 5131 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 07 09:49:36 crc kubenswrapper[5131]: I0107 09:49:36.860950 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 07 09:49:37 crc kubenswrapper[5131]: I0107 09:49:37.251387 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"8a7484877b854cc26fd09edc6fd5c32934c1dffbbe432bfe7aff19ab695ef69a"}
Jan 07 09:49:37 crc kubenswrapper[5131]: I0107 09:49:37.251806 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"bcc1f440c98d635bb4817103fd1d9a17926b7a874f95ff484233a874c8eadeb2"}
Jan 07 09:49:37 crc kubenswrapper[5131]: I0107 09:49:37.251598 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 07 09:49:37 crc kubenswrapper[5131]: I0107 09:49:37.251869 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"6d3a5a8dfdfcd032674d1e587d9cbd4f65ba801617ba61300364dac7a766bcc9"}
Jan 07 09:49:37 crc kubenswrapper[5131]: I0107 09:49:37.252463 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 07 09:49:37 crc kubenswrapper[5131]: I0107 09:49:37.252761 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:49:37 crc kubenswrapper[5131]: I0107 09:49:37.252793 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:49:37 crc kubenswrapper[5131]: I0107 09:49:37.252807 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:49:37 crc kubenswrapper[5131]: E0107 09:49:37.253253 5131 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 07 09:49:37 crc kubenswrapper[5131]: I0107 09:49:37.564188 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 07 09:49:37 crc kubenswrapper[5131]: I0107 09:49:37.564385 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 07 09:49:37 crc kubenswrapper[5131]: I0107 09:49:37.565398 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:49:37 crc kubenswrapper[5131]: I0107 09:49:37.565441 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:49:37 crc kubenswrapper[5131]: I0107 09:49:37.565457 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:49:37 crc kubenswrapper[5131]: E0107 09:49:37.565874 5131 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 07 09:49:37 crc kubenswrapper[5131]: I0107 09:49:37.588735 5131 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 07 09:49:37 crc kubenswrapper[5131]: I0107 09:49:37.588801 5131 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 07 09:49:38 crc kubenswrapper[5131]: I0107 09:49:38.152112 5131 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Jan 07 09:49:38 crc kubenswrapper[5131]: I0107 09:49:38.261196 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"bbdfb2d1ed90a5108986f54b916f1abbd45a3bae0271525826521f154c84eb84"}
Jan 07 09:49:38 crc kubenswrapper[5131]: I0107 09:49:38.261269 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"75bb0f73ec339c2c6734121cc7d17e1fc680fd5202133c971e39ab46778e5714"}
Jan 07 09:49:38 crc kubenswrapper[5131]: I0107 09:49:38.261420 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 07 09:49:38 crc kubenswrapper[5131]: I0107 09:49:38.261421 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 07 09:49:38 crc kubenswrapper[5131]: I0107 09:49:38.262385 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:49:38 crc kubenswrapper[5131]: I0107 09:49:38.262436 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:49:38 crc kubenswrapper[5131]: I0107 09:49:38.262460 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:49:38 crc kubenswrapper[5131]: I0107 09:49:38.262387 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:49:38 crc kubenswrapper[5131]: I0107 09:49:38.262555 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:49:38 crc kubenswrapper[5131]: I0107 09:49:38.262583 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:49:38 crc kubenswrapper[5131]: E0107 09:49:38.262914 5131 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 07 09:49:38 crc kubenswrapper[5131]: E0107 09:49:38.263266 5131 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 07 09:49:39 crc kubenswrapper[5131]: I0107 09:49:39.210817 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Jan 07 09:49:39 crc kubenswrapper[5131]: I0107 09:49:39.264028 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 07 09:49:39 crc kubenswrapper[5131]: I0107 09:49:39.265175 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:49:39 crc kubenswrapper[5131]: I0107 09:49:39.265274 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:49:39 crc kubenswrapper[5131]: I0107 09:49:39.265317 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:49:39 crc kubenswrapper[5131]: E0107 09:49:39.266103 5131 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 07 09:49:39 crc kubenswrapper[5131]: I0107 09:49:39.390971 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-etcd/etcd-crc"
Jan 07 09:49:39 crc kubenswrapper[5131]: I0107 09:49:39.992295 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 07 09:49:39 crc kubenswrapper[5131]: I0107 09:49:39.992642 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 07 09:49:39 crc kubenswrapper[5131]: I0107 09:49:39.993830 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:49:39 crc kubenswrapper[5131]: I0107 09:49:39.993917 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:49:39 crc kubenswrapper[5131]: I0107 09:49:39.993943 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:49:39 crc kubenswrapper[5131]: E0107 09:49:39.994522 5131 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 07 09:49:40 crc kubenswrapper[5131]: I0107 09:49:40.267253 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 07 09:49:40 crc kubenswrapper[5131]: I0107 09:49:40.268271 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:49:40 crc kubenswrapper[5131]: I0107 09:49:40.268360 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:49:40 crc kubenswrapper[5131]: I0107 09:49:40.268384 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:49:40 crc kubenswrapper[5131]: E0107 09:49:40.269368 5131 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 07 09:49:41 crc kubenswrapper[5131]: I0107 09:49:41.269494 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 07 09:49:41 crc kubenswrapper[5131]: I0107 09:49:41.270270 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:49:41 crc kubenswrapper[5131]: I0107 09:49:41.270307 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:49:41 crc kubenswrapper[5131]: I0107 09:49:41.270316 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:49:41 crc kubenswrapper[5131]: E0107 09:49:41.270699 5131 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 07 09:49:41 crc kubenswrapper[5131]: I0107 09:49:41.378254 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 07 09:49:41 crc kubenswrapper[5131]: I0107 09:49:41.378558 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 07 09:49:41 crc kubenswrapper[5131]: I0107 09:49:41.379795 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:49:41 crc kubenswrapper[5131]: I0107 09:49:41.379907 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:49:41 crc kubenswrapper[5131]: I0107 09:49:41.379931 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:49:41 crc kubenswrapper[5131]: E0107 09:49:41.380437 5131 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 07 09:49:42 crc kubenswrapper[5131]: E0107 09:49:42.231617 5131 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 07 09:49:42 crc kubenswrapper[5131]: I0107 09:49:42.304729 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 07 09:49:42 crc kubenswrapper[5131]: I0107 09:49:42.305017 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 07 09:49:42 crc kubenswrapper[5131]: I0107 09:49:42.305814 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:49:42 crc kubenswrapper[5131]: I0107 09:49:42.305946 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:49:42 crc kubenswrapper[5131]: I0107 09:49:42.305974 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:49:42 crc kubenswrapper[5131]: E0107 09:49:42.306546 5131 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 07 09:49:43 crc kubenswrapper[5131]: I0107 09:49:43.715672 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 07 09:49:43 crc kubenswrapper[5131]: I0107 09:49:43.716074 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 07 09:49:43 crc kubenswrapper[5131]: I0107 09:49:43.717511 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:49:43 crc kubenswrapper[5131]: I0107 09:49:43.717604 5131
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:49:43 crc kubenswrapper[5131]: I0107 09:49:43.717631 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:49:43 crc kubenswrapper[5131]: E0107 09:49:43.718524 5131 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 07 09:49:43 crc kubenswrapper[5131]: I0107 09:49:43.725621 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 07 09:49:44 crc kubenswrapper[5131]: I0107 09:49:44.277410 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 07 09:49:44 crc kubenswrapper[5131]: I0107 09:49:44.278733 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:49:44 crc kubenswrapper[5131]: I0107 09:49:44.279256 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:49:44 crc kubenswrapper[5131]: I0107 09:49:44.279458 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:49:44 crc kubenswrapper[5131]: E0107 09:49:44.280230 5131 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 07 09:49:44 crc kubenswrapper[5131]: I0107 09:49:44.285164 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 07 09:49:45 crc kubenswrapper[5131]: I0107 09:49:45.076731 5131 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 07 09:49:45 crc kubenswrapper[5131]: E0107 09:49:45.104558 5131 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s" Jan 07 09:49:45 crc kubenswrapper[5131]: I0107 09:49:45.279712 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 07 09:49:45 crc kubenswrapper[5131]: I0107 09:49:45.280276 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:49:45 crc kubenswrapper[5131]: I0107 09:49:45.280327 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:49:45 crc kubenswrapper[5131]: I0107 09:49:45.280344 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:49:45 crc kubenswrapper[5131]: E0107 09:49:45.280697 5131 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 07 09:49:45 crc kubenswrapper[5131]: E0107 09:49:45.340328 5131 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Jan 07 09:49:45 crc kubenswrapper[5131]: I0107 09:49:45.605713 5131 trace.go:236] Trace[1986075301]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (07-Jan-2026 09:49:35.604) (total time: 10001ms): Jan 07 09:49:45 crc kubenswrapper[5131]: Trace[1986075301]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 
10001ms (09:49:45.605) Jan 07 09:49:45 crc kubenswrapper[5131]: Trace[1986075301]: [10.001503965s] [10.001503965s] END Jan 07 09:49:45 crc kubenswrapper[5131]: E0107 09:49:45.605751 5131 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 07 09:49:45 crc kubenswrapper[5131]: I0107 09:49:45.837049 5131 trace.go:236] Trace[869867895]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (07-Jan-2026 09:49:35.834) (total time: 10002ms): Jan 07 09:49:45 crc kubenswrapper[5131]: Trace[869867895]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (09:49:45.836) Jan 07 09:49:45 crc kubenswrapper[5131]: Trace[869867895]: [10.002085517s] [10.002085517s] END Jan 07 09:49:45 crc kubenswrapper[5131]: E0107 09:49:45.837110 5131 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 07 09:49:45 crc kubenswrapper[5131]: I0107 09:49:45.897210 5131 trace.go:236] Trace[1577316331]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (07-Jan-2026 09:49:35.895) (total time: 10001ms): Jan 07 09:49:45 crc kubenswrapper[5131]: Trace[1577316331]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (09:49:45.897) Jan 07 09:49:45 crc kubenswrapper[5131]: Trace[1577316331]: [10.001432977s] 
[10.001432977s] END Jan 07 09:49:45 crc kubenswrapper[5131]: E0107 09:49:45.897302 5131 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 07 09:49:45 crc kubenswrapper[5131]: I0107 09:49:45.945894 5131 trace.go:236] Trace[1818356826]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (07-Jan-2026 09:49:35.943) (total time: 10001ms): Jan 07 09:49:45 crc kubenswrapper[5131]: Trace[1818356826]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (09:49:45.945) Jan 07 09:49:45 crc kubenswrapper[5131]: Trace[1818356826]: [10.001848263s] [10.001848263s] END Jan 07 09:49:45 crc kubenswrapper[5131]: E0107 09:49:45.945951 5131 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 07 09:49:46 crc kubenswrapper[5131]: I0107 09:49:46.099705 5131 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 07 09:49:46 crc kubenswrapper[5131]: I0107 09:49:46.099785 5131 prober.go:120] "Probe failed" probeType="Startup" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 07 09:49:46 crc kubenswrapper[5131]: I0107 09:49:46.106546 5131 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 07 09:49:46 crc kubenswrapper[5131]: I0107 09:49:46.106620 5131 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 07 09:49:46 crc kubenswrapper[5131]: I0107 09:49:46.871926 5131 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 07 09:49:46 crc kubenswrapper[5131]: [+]log ok Jan 07 09:49:46 crc kubenswrapper[5131]: [+]etcd ok Jan 07 09:49:46 crc kubenswrapper[5131]: [+]poststarthook/openshift.io-api-request-count-filter ok Jan 07 09:49:46 crc kubenswrapper[5131]: [+]poststarthook/openshift.io-startkubeinformers ok Jan 07 09:49:46 crc kubenswrapper[5131]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Jan 07 09:49:46 crc kubenswrapper[5131]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Jan 07 09:49:46 crc kubenswrapper[5131]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 07 09:49:46 crc kubenswrapper[5131]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 07 09:49:46 crc kubenswrapper[5131]: 
[+]poststarthook/generic-apiserver-start-informers ok Jan 07 09:49:46 crc kubenswrapper[5131]: [+]poststarthook/priority-and-fairness-config-consumer ok Jan 07 09:49:46 crc kubenswrapper[5131]: [+]poststarthook/priority-and-fairness-filter ok Jan 07 09:49:46 crc kubenswrapper[5131]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 07 09:49:46 crc kubenswrapper[5131]: [+]poststarthook/start-apiextensions-informers ok Jan 07 09:49:46 crc kubenswrapper[5131]: [+]poststarthook/start-apiextensions-controllers ok Jan 07 09:49:46 crc kubenswrapper[5131]: [+]poststarthook/crd-informer-synced ok Jan 07 09:49:46 crc kubenswrapper[5131]: [+]poststarthook/start-system-namespaces-controller ok Jan 07 09:49:46 crc kubenswrapper[5131]: [+]poststarthook/start-cluster-authentication-info-controller ok Jan 07 09:49:46 crc kubenswrapper[5131]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Jan 07 09:49:46 crc kubenswrapper[5131]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Jan 07 09:49:46 crc kubenswrapper[5131]: [+]poststarthook/start-legacy-token-tracking-controller ok Jan 07 09:49:46 crc kubenswrapper[5131]: [+]poststarthook/start-service-ip-repair-controllers ok Jan 07 09:49:46 crc kubenswrapper[5131]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Jan 07 09:49:46 crc kubenswrapper[5131]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Jan 07 09:49:46 crc kubenswrapper[5131]: [+]poststarthook/priority-and-fairness-config-producer ok Jan 07 09:49:46 crc kubenswrapper[5131]: [+]poststarthook/bootstrap-controller ok Jan 07 09:49:46 crc kubenswrapper[5131]: [+]poststarthook/start-kubernetes-service-cidr-controller ok Jan 07 09:49:46 crc kubenswrapper[5131]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Jan 07 09:49:46 crc kubenswrapper[5131]: [+]poststarthook/start-kube-aggregator-informers ok Jan 07 09:49:46 crc kubenswrapper[5131]: 
[+]poststarthook/apiservice-status-local-available-controller ok Jan 07 09:49:46 crc kubenswrapper[5131]: [+]poststarthook/apiservice-status-remote-available-controller ok Jan 07 09:49:46 crc kubenswrapper[5131]: [+]poststarthook/apiservice-registration-controller ok Jan 07 09:49:46 crc kubenswrapper[5131]: [+]poststarthook/apiservice-wait-for-first-sync ok Jan 07 09:49:46 crc kubenswrapper[5131]: [+]poststarthook/apiservice-discovery-controller ok Jan 07 09:49:46 crc kubenswrapper[5131]: [+]poststarthook/kube-apiserver-autoregistration ok Jan 07 09:49:46 crc kubenswrapper[5131]: [+]autoregister-completion ok Jan 07 09:49:46 crc kubenswrapper[5131]: [+]poststarthook/apiservice-openapi-controller ok Jan 07 09:49:46 crc kubenswrapper[5131]: [+]poststarthook/apiservice-openapiv3-controller ok Jan 07 09:49:46 crc kubenswrapper[5131]: livez check failed Jan 07 09:49:46 crc kubenswrapper[5131]: I0107 09:49:46.872112 5131 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 07 09:49:47 crc kubenswrapper[5131]: I0107 09:49:47.589154 5131 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 07 09:49:47 crc kubenswrapper[5131]: I0107 09:49:47.589317 5131 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout 
exceeded while awaiting headers)" Jan 07 09:49:48 crc kubenswrapper[5131]: E0107 09:49:48.313292 5131 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="6.4s" Jan 07 09:49:48 crc kubenswrapper[5131]: I0107 09:49:48.541464 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 07 09:49:48 crc kubenswrapper[5131]: I0107 09:49:48.542993 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:49:48 crc kubenswrapper[5131]: I0107 09:49:48.543062 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:49:48 crc kubenswrapper[5131]: I0107 09:49:48.543084 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:49:48 crc kubenswrapper[5131]: I0107 09:49:48.543119 5131 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 07 09:49:48 crc kubenswrapper[5131]: E0107 09:49:48.558004 5131 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 07 09:49:49 crc kubenswrapper[5131]: I0107 09:49:49.236382 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 07 09:49:49 crc kubenswrapper[5131]: I0107 09:49:49.236740 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 07 09:49:49 crc kubenswrapper[5131]: I0107 09:49:49.237806 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 
07 09:49:49 crc kubenswrapper[5131]: I0107 09:49:49.237891 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:49:49 crc kubenswrapper[5131]: I0107 09:49:49.237911 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:49:49 crc kubenswrapper[5131]: E0107 09:49:49.238630 5131 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 07 09:49:49 crc kubenswrapper[5131]: I0107 09:49:49.251760 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 07 09:49:49 crc kubenswrapper[5131]: I0107 09:49:49.289362 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 07 09:49:49 crc kubenswrapper[5131]: I0107 09:49:49.289978 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:49:49 crc kubenswrapper[5131]: I0107 09:49:49.290021 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:49:49 crc kubenswrapper[5131]: I0107 09:49:49.290031 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:49:49 crc kubenswrapper[5131]: E0107 09:49:49.290432 5131 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 07 09:49:49 crc kubenswrapper[5131]: E0107 09:49:49.369612 5131 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 07 09:49:49 crc kubenswrapper[5131]: E0107 
09:49:49.604493 5131 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 07 09:49:50 crc kubenswrapper[5131]: E0107 09:49:50.064149 5131 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.016422 5131 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 07 09:49:51 crc kubenswrapper[5131]: I0107 09:49:51.103605 5131 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Jan 07 09:49:51 crc kubenswrapper[5131]: I0107 09:49:51.108472 5131 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.108495 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188869f90aad597f default 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:32.089825663 +0000 UTC m=+0.256127257,LastTimestamp:2026-01-07 09:49:32.089825663 +0000 UTC m=+0.256127257,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.111283 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188869f90f827c08 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:32.170902536 +0000 UTC m=+0.337204140,LastTimestamp:2026-01-07 09:49:32.170902536 +0000 UTC m=+0.337204140,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.117934 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188869f90f8365d6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: 
NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:32.17096239 +0000 UTC m=+0.337264004,LastTimestamp:2026-01-07 09:49:32.17096239 +0000 UTC m=+0.337264004,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.123171 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188869f90f83c9a7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:32.170987943 +0000 UTC m=+0.337289537,LastTimestamp:2026-01-07 09:49:32.170987943 +0000 UTC m=+0.337289537,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.130437 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188869f9130b56c6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:32.230203078 +0000 UTC m=+0.396504652,LastTimestamp:2026-01-07 09:49:32.230203078 +0000 UTC 
m=+0.396504652,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.138947 5131 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188869f90f827c08\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188869f90f827c08 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:32.170902536 +0000 UTC m=+0.337204140,LastTimestamp:2026-01-07 09:49:32.280454091 +0000 UTC m=+0.446755655,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.148006 5131 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188869f90f8365d6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188869f90f8365d6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:32.17096239 +0000 UTC m=+0.337264004,LastTimestamp:2026-01-07 09:49:32.280467827 +0000 UTC m=+0.446769391,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 
07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.157169 5131 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188869f90f83c9a7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188869f90f83c9a7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:32.170987943 +0000 UTC m=+0.337289537,LastTimestamp:2026-01-07 09:49:32.280477335 +0000 UTC m=+0.446778889,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.163235 5131 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188869f90f827c08\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188869f90f827c08 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:32.170902536 +0000 UTC m=+0.337204140,LastTimestamp:2026-01-07 09:49:32.281403866 +0000 UTC m=+0.447705440,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.170370 5131 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188869f90f8365d6\" is 
forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188869f90f8365d6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:32.17096239 +0000 UTC m=+0.337264004,LastTimestamp:2026-01-07 09:49:32.281429579 +0000 UTC m=+0.447731153,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.177630 5131 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188869f90f83c9a7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188869f90f83c9a7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:32.170987943 +0000 UTC m=+0.337289537,LastTimestamp:2026-01-07 09:49:32.281444585 +0000 UTC m=+0.447746159,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.183661 5131 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188869f90f827c08\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188869f90f827c08 
default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:32.170902536 +0000 UTC m=+0.337204140,LastTimestamp:2026-01-07 09:49:32.281874629 +0000 UTC m=+0.448176193,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.194299 5131 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188869f90f8365d6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188869f90f8365d6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:32.17096239 +0000 UTC m=+0.337264004,LastTimestamp:2026-01-07 09:49:32.281889775 +0000 UTC m=+0.448191339,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.199698 5131 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188869f90f83c9a7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188869f90f83c9a7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:32.170987943 +0000 UTC m=+0.337289537,LastTimestamp:2026-01-07 09:49:32.281897493 +0000 UTC m=+0.448199057,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.207275 5131 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188869f90f827c08\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188869f90f827c08 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:32.170902536 +0000 UTC m=+0.337204140,LastTimestamp:2026-01-07 09:49:32.282943142 +0000 UTC m=+0.449244706,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.213296 5131 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188869f90f8365d6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188869f90f8365d6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc 
status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:32.17096239 +0000 UTC m=+0.337264004,LastTimestamp:2026-01-07 09:49:32.282957228 +0000 UTC m=+0.449258792,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.216513 5131 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188869f90f83c9a7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188869f90f83c9a7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:32.170987943 +0000 UTC m=+0.337289537,LastTimestamp:2026-01-07 09:49:32.282969435 +0000 UTC m=+0.449270999,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.221259 5131 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188869f90f827c08\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188869f90f827c08 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:32.170902536 +0000 UTC 
m=+0.337204140,LastTimestamp:2026-01-07 09:49:32.283032388 +0000 UTC m=+0.449333962,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.222565 5131 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188869f90f8365d6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188869f90f8365d6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:32.17096239 +0000 UTC m=+0.337264004,LastTimestamp:2026-01-07 09:49:32.283052363 +0000 UTC m=+0.449353937,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.228247 5131 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188869f90f83c9a7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188869f90f83c9a7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:32.170987943 +0000 UTC m=+0.337289537,LastTimestamp:2026-01-07 09:49:32.28306253 +0000 UTC m=+0.449364104,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.235806 5131 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188869f90f827c08\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188869f90f827c08 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:32.170902536 +0000 UTC m=+0.337204140,LastTimestamp:2026-01-07 09:49:32.28432891 +0000 UTC m=+0.450630484,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.241780 5131 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188869f90f8365d6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188869f90f8365d6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:32.17096239 +0000 UTC m=+0.337264004,LastTimestamp:2026-01-07 09:49:32.284357712 +0000 UTC m=+0.450659286,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.248127 5131 
event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188869f90f83c9a7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188869f90f83c9a7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:32.170987943 +0000 UTC m=+0.337289537,LastTimestamp:2026-01-07 09:49:32.284398411 +0000 UTC m=+0.450699985,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.254072 5131 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188869f90f827c08\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188869f90f827c08 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:32.170902536 +0000 UTC m=+0.337204140,LastTimestamp:2026-01-07 09:49:32.284964629 +0000 UTC m=+0.451266203,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.260128 5131 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188869f90f8365d6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" 
in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188869f90f8365d6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:32.17096239 +0000 UTC m=+0.337264004,LastTimestamp:2026-01-07 09:49:32.284988253 +0000 UTC m=+0.451289827,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.268277 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188869f92c5bc87b openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:32.654905467 +0000 UTC m=+0.821207051,LastTimestamp:2026-01-07 09:49:32.654905467 +0000 UTC m=+0.821207051,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.273681 5131 event.go:359] "Server rejected event (will not retry!)" err="events is 
forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188869f92e0a031a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:32.683100954 +0000 UTC m=+0.849402508,LastTimestamp:2026-01-07 09:49:32.683100954 +0000 UTC m=+0.849402508,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.279102 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188869f92ea6281b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:32.693334043 +0000 UTC m=+0.859635607,LastTimestamp:2026-01-07 09:49:32.693334043 +0000 UTC m=+0.859635607,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 
+0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.284881 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188869f92edfb7a9 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:32.697106345 +0000 UTC m=+0.863407939,LastTimestamp:2026-01-07 09:49:32.697106345 +0000 UTC m=+0.863407939,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.290889 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188869f92f3768e1 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:32.702853345 +0000 UTC m=+0.869154909,LastTimestamp:2026-01-07 09:49:32.702853345 +0000 UTC m=+0.869154909,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.296862 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188869f94afff413 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container: wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:33.168981011 +0000 UTC m=+1.335282585,LastTimestamp:2026-01-07 09:49:33.168981011 +0000 UTC m=+1.335282585,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.302563 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188869f94b014261 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:33.169066593 +0000 UTC m=+1.335368167,LastTimestamp:2026-01-07 09:49:33.169066593 +0000 UTC m=+1.335368167,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.308478 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188869f94b0221d2 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:33.169123794 +0000 UTC m=+1.335425358,LastTimestamp:2026-01-07 09:49:33.169123794 +0000 UTC m=+1.335425358,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.314500 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.188869f94b037778 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:33.169211256 +0000 UTC m=+1.335512860,LastTimestamp:2026-01-07 09:49:33.169211256 +0000 UTC m=+1.335512860,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.324302 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188869f94b3fae2f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:33.173157423 +0000 UTC m=+1.339458997,LastTimestamp:2026-01-07 09:49:33.173157423 +0000 UTC m=+1.339458997,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.330521 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" 
event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188869f94ba22ff2 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:33.17961317 +0000 UTC m=+1.345914754,LastTimestamp:2026-01-07 09:49:33.17961317 +0000 UTC m=+1.345914754,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.338604 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188869f94bb47cbf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:33.180812479 +0000 UTC m=+1.347114073,LastTimestamp:2026-01-07 09:49:33.180812479 +0000 UTC m=+1.347114073,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.345600 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in 
the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188869f94c2f0adf openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:33.188844255 +0000 UTC m=+1.355145819,LastTimestamp:2026-01-07 09:49:33.188844255 +0000 UTC m=+1.355145819,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.356932 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188869f94c3b4ccc openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:33.189647564 +0000 UTC m=+1.355949128,LastTimestamp:2026-01-07 09:49:33.189647564 +0000 UTC m=+1.355949128,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.362165 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188869f94c5c1f27 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:33.191798567 +0000 UTC m=+1.358100141,LastTimestamp:2026-01-07 09:49:33.191798567 +0000 UTC m=+1.358100141,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.370518 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188869f94cf1e51a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:33.201614106 +0000 UTC m=+1.367915670,LastTimestamp:2026-01-07 09:49:33.201614106 +0000 UTC m=+1.367915670,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.376371 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188869f94d09c098 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:33.203177624 +0000 UTC m=+1.369479198,LastTimestamp:2026-01-07 09:49:33.203177624 +0000 UTC m=+1.369479198,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.383046 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188869f95f9aa6ee openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:33.514663662 +0000 UTC m=+1.680965236,LastTimestamp:2026-01-07 09:49:33.514663662 +0000 UTC m=+1.680965236,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.389377 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188869f95fa747aa openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:33.515491242 +0000 UTC m=+1.681792826,LastTimestamp:2026-01-07 09:49:33.515491242 +0000 UTC m=+1.681792826,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.396006 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" 
event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188869f960949e2d openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:33.531045421 +0000 UTC m=+1.697347015,LastTimestamp:2026-01-07 09:49:33.531045421 +0000 UTC m=+1.697347015,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.402115 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188869f9609f5590 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:33.531747728 +0000 UTC m=+1.698049332,LastTimestamp:2026-01-07 09:49:33.531747728 +0000 UTC m=+1.698049332,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.409575 5131 event.go:359] "Server rejected event (will not retry!)" err="events 
is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188869f960aea04f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:33.532749903 +0000 UTC m=+1.699051467,LastTimestamp:2026-01-07 09:49:33.532749903 +0000 UTC m=+1.699051467,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: I0107 09:49:51.415250 5131 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:36764->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 07 09:49:51 crc kubenswrapper[5131]: I0107 09:49:51.415356 5131 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:36764->192.168.126.11:17697: read: connection reset by peer" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.415906 5131 event.go:359] "Server 
rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188869f97a7fa341 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container: kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:33.965878081 +0000 UTC m=+2.132179645,LastTimestamp:2026-01-07 09:49:33.965878081 +0000 UTC m=+2.132179645,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.426913 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188869f97b3bce8b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:33.978209931 +0000 UTC m=+2.144511525,LastTimestamp:2026-01-07 09:49:33.978209931 +0000 UTC 
m=+2.144511525,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.432641 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188869f97b4d26a1 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:33.979346593 +0000 UTC m=+2.145648187,LastTimestamp:2026-01-07 09:49:33.979346593 +0000 UTC m=+2.145648187,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.436753 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188869f986942be8 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container: kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:34.168550376 +0000 UTC m=+2.334851950,LastTimestamp:2026-01-07 09:49:34.168550376 +0000 UTC m=+2.334851950,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.440999 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188869f98734b019 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:34.179069977 +0000 UTC m=+2.345371541,LastTimestamp:2026-01-07 09:49:34.179069977 +0000 UTC m=+2.345371541,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.445500 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource 
\"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188869f988af31e3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:34.203875811 +0000 UTC m=+2.370177375,LastTimestamp:2026-01-07 09:49:34.203875811 +0000 UTC m=+2.370177375,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.452077 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188869f988e3aa18 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:34.207314456 +0000 UTC m=+2.373616030,LastTimestamp:2026-01-07 09:49:34.207314456 +0000 UTC m=+2.373616030,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.459106 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188869f9894ba1de openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:34.214128094 +0000 UTC m=+2.380429668,LastTimestamp:2026-01-07 09:49:34.214128094 +0000 UTC m=+2.380429668,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.465404 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188869f995c5052a openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: 
kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:34.423409962 +0000 UTC m=+2.589711526,LastTimestamp:2026-01-07 09:49:34.423409962 +0000 UTC m=+2.589711526,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.470129 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188869f995ecf5c8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container: etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:34.426027464 +0000 UTC m=+2.592329028,LastTimestamp:2026-01-07 09:49:34.426027464 +0000 UTC m=+2.592329028,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.475779 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188869f995f47540 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: 
kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:34.426518848 +0000 UTC m=+2.592820412,LastTimestamp:2026-01-07 09:49:34.426518848 +0000 UTC m=+2.592820412,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.482540 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188869f9967476a2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:34.43490781 +0000 UTC m=+2.601209374,LastTimestamp:2026-01-07 09:49:34.43490781 +0000 UTC m=+2.601209374,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.488957 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188869f996853c44 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:34.43600698 +0000 UTC m=+2.602308544,LastTimestamp:2026-01-07 09:49:34.43600698 +0000 UTC m=+2.602308544,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.493126 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188869f99686c003 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:34.436106243 +0000 UTC m=+2.602407837,LastTimestamp:2026-01-07 09:49:34.436106243 +0000 UTC m=+2.602407837,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.498877 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188869f996915bca openshift-kube-scheduler 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:34.436801482 +0000 UTC m=+2.603103046,LastTimestamp:2026-01-07 09:49:34.436801482 +0000 UTC m=+2.603103046,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.505027 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188869f9969799e1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:34.437210593 +0000 UTC m=+2.603512197,LastTimestamp:2026-01-07 09:49:34.437210593 +0000 UTC m=+2.603512197,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc 
kubenswrapper[5131]: E0107 09:49:51.512399 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188869f9a16d04b7 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container: kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:34.618969271 +0000 UTC m=+2.785270835,LastTimestamp:2026-01-07 09:49:34.618969271 +0000 UTC m=+2.785270835,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.524365 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188869f9a192b8e5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container: kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:34.621440229 +0000 UTC m=+2.787741793,LastTimestamp:2026-01-07 09:49:34.621440229 +0000 UTC m=+2.787741793,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.529786 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188869f9a202c3ea openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:34.628783082 +0000 UTC m=+2.795084646,LastTimestamp:2026-01-07 09:49:34.628783082 +0000 UTC m=+2.795084646,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.535896 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188869f9a20efefa openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:34.629584634 +0000 UTC m=+2.795886198,LastTimestamp:2026-01-07 09:49:34.629584634 +0000 UTC m=+2.795886198,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.541420 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188869f9a267c9d7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:34.635403735 +0000 UTC m=+2.801705309,LastTimestamp:2026-01-07 09:49:34.635403735 +0000 UTC m=+2.801705309,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.547728 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188869f9a2d31cb3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:34.642437299 +0000 UTC m=+2.808738863,LastTimestamp:2026-01-07 09:49:34.642437299 +0000 UTC m=+2.808738863,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.553578 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188869f9afc8ee81 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container: kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:34.859873921 +0000 UTC m=+3.026175485,LastTimestamp:2026-01-07 09:49:34.859873921 +0000 UTC m=+3.026175485,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.560353 5131 event.go:359] "Server rejected event (will not retry!)" err="events is 
forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188869f9b057da32 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:34.86924037 +0000 UTC m=+3.035541944,LastTimestamp:2026-01-07 09:49:34.86924037 +0000 UTC m=+3.035541944,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.565745 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188869f9b0d6e93a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container: kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:34.87756729 +0000 UTC m=+3.043868854,LastTimestamp:2026-01-07 09:49:34.87756729 +0000 UTC m=+3.043868854,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.572184 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188869f9b180be03 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:34.888697347 +0000 UTC m=+3.054998921,LastTimestamp:2026-01-07 09:49:34.888697347 +0000 UTC m=+3.054998921,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.581536 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188869f9b1914007 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:34.889779207 +0000 UTC m=+3.056080771,LastTimestamp:2026-01-07 09:49:34.889779207 +0000 UTC m=+3.056080771,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.588598 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188869f9bd5e3783 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:35.087761283 +0000 UTC m=+3.254062847,LastTimestamp:2026-01-07 09:49:35.087761283 +0000 UTC m=+3.254062847,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.594599 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188869f9be7c852d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:35.106524461 +0000 UTC m=+3.272826025,LastTimestamp:2026-01-07 09:49:35.106524461 +0000 UTC m=+3.272826025,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.599262 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188869f9be8d3427 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:35.107617831 +0000 UTC m=+3.273919385,LastTimestamp:2026-01-07 09:49:35.107617831 +0000 UTC m=+3.273919385,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.603641 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create 
resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188869f9c63a8545 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:35.236416837 +0000 UTC m=+3.402718441,LastTimestamp:2026-01-07 09:49:35.236416837 +0000 UTC m=+3.402718441,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.609491 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188869f9cd78e63a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:35.357945402 +0000 UTC m=+3.524246966,LastTimestamp:2026-01-07 09:49:35.357945402 +0000 UTC m=+3.524246966,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc 
kubenswrapper[5131]: E0107 09:49:51.615821 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188869f9ce343e4d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:35.370223181 +0000 UTC m=+3.536524745,LastTimestamp:2026-01-07 09:49:35.370223181 +0000 UTC m=+3.536524745,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.630149 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188869f9d6387332 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container: etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:35.504716594 +0000 UTC m=+3.671018188,LastTimestamp:2026-01-07 09:49:35.504716594 +0000 UTC m=+3.671018188,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.633167 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188869f9d72a8620 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:35.520581152 +0000 UTC m=+3.686882746,LastTimestamp:2026-01-07 09:49:35.520581152 +0000 UTC m=+3.686882746,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.635968 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188869fa028359db openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:36.247822811 +0000 UTC m=+4.414124415,LastTimestamp:2026-01-07 09:49:36.247822811 +0000 UTC 
m=+4.414124415,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.641493 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188869fa11c41d04 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:36.503725316 +0000 UTC m=+4.670026920,LastTimestamp:2026-01-07 09:49:36.503725316 +0000 UTC m=+4.670026920,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.647444 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188869fa12824cc5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:36.516189381 +0000 UTC m=+4.682490985,LastTimestamp:2026-01-07 09:49:36.516189381 +0000 UTC m=+4.682490985,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.654983 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188869fa129a89cb openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:36.517777867 +0000 UTC m=+4.684079441,LastTimestamp:2026-01-07 09:49:36.517777867 +0000 UTC m=+4.684079441,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.661686 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188869fa21cf50d1 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:36.772894929 +0000 UTC m=+4.939196493,LastTimestamp:2026-01-07 09:49:36.772894929 +0000 UTC 
m=+4.939196493,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.667488 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188869fa22f62179 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:36.792215929 +0000 UTC m=+4.958517493,LastTimestamp:2026-01-07 09:49:36.792215929 +0000 UTC m=+4.958517493,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.673790 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188869fa23090cb8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:36.7934558 +0000 UTC 
m=+4.959757394,LastTimestamp:2026-01-07 09:49:36.7934558 +0000 UTC m=+4.959757394,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.678602 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188869fa31cea29e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container: etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:37.04128579 +0000 UTC m=+5.207587384,LastTimestamp:2026-01-07 09:49:37.04128579 +0000 UTC m=+5.207587384,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.682692 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188869fa32d52c3c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:37.058491452 +0000 UTC m=+5.224793046,LastTimestamp:2026-01-07 09:49:37.058491452 +0000 UTC 
m=+5.224793046,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.686518 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188869fa32eae625 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:37.059915301 +0000 UTC m=+5.226216895,LastTimestamp:2026-01-07 09:49:37.059915301 +0000 UTC m=+5.226216895,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.690524 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188869fa43033fb7 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container: etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:37.329946551 +0000 UTC 
m=+5.496248145,LastTimestamp:2026-01-07 09:49:37.329946551 +0000 UTC m=+5.496248145,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.694074 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188869fa43feea99 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:37.346439833 +0000 UTC m=+5.512741437,LastTimestamp:2026-01-07 09:49:37.346439833 +0000 UTC m=+5.512741437,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.698373 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188869fa4416621d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:37.347977757 +0000 UTC m=+5.514279361,LastTimestamp:2026-01-07 09:49:37.347977757 +0000 UTC m=+5.514279361,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.703291 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Jan 07 09:49:51 crc kubenswrapper[5131]: &Event{ObjectMeta:{kube-controller-manager-crc.188869fa5270b4f2 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 07 09:49:51 crc kubenswrapper[5131]: body: Jan 07 09:49:51 crc kubenswrapper[5131]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:37.588778226 +0000 UTC m=+5.755079800,LastTimestamp:2026-01-07 09:49:37.588778226 +0000 UTC m=+5.755079800,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 07 09:49:51 crc kubenswrapper[5131]: > Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.709826 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188869fa52721170 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:37.58886744 +0000 UTC m=+5.755169014,LastTimestamp:2026-01-07 09:49:37.58886744 +0000 UTC m=+5.755169014,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.714771 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188869fa53e3a0ca openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container: etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:37.613086922 +0000 UTC m=+5.779388496,LastTimestamp:2026-01-07 09:49:37.613086922 +0000 UTC m=+5.779388496,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.719614 5131 event.go:359] 
"Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188869fa54f964da openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:37.631290586 +0000 UTC m=+5.797592190,LastTimestamp:2026-01-07 09:49:37.631290586 +0000 UTC m=+5.797592190,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.726899 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 07 09:49:51 crc kubenswrapper[5131]: &Event{ObjectMeta:{kube-apiserver-crc.188869fc4dbbf403 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Jan 07 09:49:51 crc kubenswrapper[5131]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 07 09:49:51 crc kubenswrapper[5131]: Jan 07 09:49:51 crc kubenswrapper[5131]: 
,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:46.099758083 +0000 UTC m=+14.266059657,LastTimestamp:2026-01-07 09:49:46.099758083 +0000 UTC m=+14.266059657,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 07 09:49:51 crc kubenswrapper[5131]: > Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.731580 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188869fc4dbccdd5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:46.099813845 +0000 UTC m=+14.266115419,LastTimestamp:2026-01-07 09:49:46.099813845 +0000 UTC m=+14.266115419,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.735868 5131 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188869fc4dbbf403\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 07 09:49:51 crc kubenswrapper[5131]: &Event{ObjectMeta:{kube-apiserver-crc.188869fc4dbbf403 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403
Jan 07 09:49:51 crc kubenswrapper[5131]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 07 09:49:51 crc kubenswrapper[5131]:
Jan 07 09:49:51 crc kubenswrapper[5131]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:46.099758083 +0000 UTC m=+14.266059657,LastTimestamp:2026-01-07 09:49:46.106594571 +0000 UTC m=+14.272896145,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Jan 07 09:49:51 crc kubenswrapper[5131]: >
Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.739533 5131 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188869fc4dbccdd5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188869fc4dbccdd5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:46.099813845 +0000 UTC m=+14.266115419,LastTimestamp:2026-01-07 09:49:46.106644472 +0000 UTC m=+14.272946056,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.744571 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Jan 07 09:49:51 crc kubenswrapper[5131]: &Event{ObjectMeta:{kube-apiserver-crc.188869fc7bc4845a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 500
Jan 07 09:49:51 crc kubenswrapper[5131]: body: [+]ping ok
Jan 07 09:49:51 crc kubenswrapper[5131]: [+]log ok
Jan 07 09:49:51 crc kubenswrapper[5131]: [+]etcd ok
Jan 07 09:49:51 crc kubenswrapper[5131]: [+]poststarthook/openshift.io-api-request-count-filter ok
Jan 07 09:49:51 crc kubenswrapper[5131]: [+]poststarthook/openshift.io-startkubeinformers ok
Jan 07 09:49:51 crc kubenswrapper[5131]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Jan 07 09:49:51 crc kubenswrapper[5131]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Jan 07 09:49:51 crc kubenswrapper[5131]: [+]poststarthook/start-apiserver-admission-initializer ok
Jan 07 09:49:51 crc kubenswrapper[5131]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Jan 07 09:49:51 crc kubenswrapper[5131]: [+]poststarthook/generic-apiserver-start-informers ok
Jan 07 09:49:51 crc kubenswrapper[5131]: [+]poststarthook/priority-and-fairness-config-consumer ok
Jan 07 09:49:51 crc kubenswrapper[5131]: [+]poststarthook/priority-and-fairness-filter ok
Jan 07 09:49:51 crc kubenswrapper[5131]: [+]poststarthook/storage-object-count-tracker-hook ok
Jan 07 09:49:51 crc kubenswrapper[5131]: [+]poststarthook/start-apiextensions-informers ok
Jan 07 09:49:51 crc kubenswrapper[5131]: [+]poststarthook/start-apiextensions-controllers ok
Jan 07 09:49:51 crc kubenswrapper[5131]: [+]poststarthook/crd-informer-synced ok
Jan 07 09:49:51 crc kubenswrapper[5131]: [+]poststarthook/start-system-namespaces-controller ok
Jan 07 09:49:51 crc kubenswrapper[5131]: [+]poststarthook/start-cluster-authentication-info-controller ok
Jan 07 09:49:51 crc kubenswrapper[5131]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Jan 07 09:49:51 crc kubenswrapper[5131]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Jan 07 09:49:51 crc kubenswrapper[5131]: [+]poststarthook/start-legacy-token-tracking-controller ok
Jan 07 09:49:51 crc kubenswrapper[5131]: [+]poststarthook/start-service-ip-repair-controllers ok
Jan 07 09:49:51 crc kubenswrapper[5131]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
Jan 07 09:49:51 crc kubenswrapper[5131]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
Jan 07 09:49:51 crc kubenswrapper[5131]: [+]poststarthook/priority-and-fairness-config-producer ok
Jan 07 09:49:51 crc kubenswrapper[5131]: [+]poststarthook/bootstrap-controller ok
Jan 07 09:49:51 crc kubenswrapper[5131]: [+]poststarthook/start-kubernetes-service-cidr-controller ok
Jan 07 09:49:51 crc kubenswrapper[5131]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Jan 07 09:49:51 crc kubenswrapper[5131]: [+]poststarthook/start-kube-aggregator-informers ok
Jan 07 09:49:51 crc kubenswrapper[5131]: [+]poststarthook/apiservice-status-local-available-controller ok
Jan 07 09:49:51 crc kubenswrapper[5131]: [+]poststarthook/apiservice-status-remote-available-controller ok
Jan 07 09:49:51 crc kubenswrapper[5131]: [+]poststarthook/apiservice-registration-controller ok
Jan 07 09:49:51 crc kubenswrapper[5131]: [+]poststarthook/apiservice-wait-for-first-sync ok
Jan 07 09:49:51 crc kubenswrapper[5131]: [+]poststarthook/apiservice-discovery-controller ok
Jan 07 09:49:51 crc kubenswrapper[5131]: [+]poststarthook/kube-apiserver-autoregistration ok
Jan 07 09:49:51 crc kubenswrapper[5131]: [+]autoregister-completion ok
Jan 07 09:49:51 crc kubenswrapper[5131]: [+]poststarthook/apiservice-openapi-controller ok
Jan 07 09:49:51 crc kubenswrapper[5131]: [+]poststarthook/apiservice-openapiv3-controller ok
Jan 07 09:49:51 crc kubenswrapper[5131]: livez check failed
Jan 07 09:49:51 crc kubenswrapper[5131]:
Jan 07 09:49:51 crc kubenswrapper[5131]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:46.872071258 +0000 UTC m=+15.038372852,LastTimestamp:2026-01-07 09:49:46.872071258 +0000 UTC m=+15.038372852,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Jan 07 09:49:51 crc kubenswrapper[5131]: >
Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.749584 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188869fc7bc5aa26 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:46.87214647 +0000 UTC m=+15.038448074,LastTimestamp:2026-01-07 09:49:46.87214647 +0000 UTC m=+15.038448074,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.753379 5131 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.188869fa5270b4f2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=<
Jan 07 09:49:51 crc kubenswrapper[5131]: &Event{ObjectMeta:{kube-controller-manager-crc.188869fa5270b4f2 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 07 09:49:51 crc kubenswrapper[5131]: body:
Jan 07 09:49:51 crc kubenswrapper[5131]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:37.588778226 +0000 UTC m=+5.755079800,LastTimestamp:2026-01-07 09:49:47.58925936 +0000 UTC m=+15.755560954,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Jan 07 09:49:51 crc kubenswrapper[5131]: >
Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.757413 5131 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.188869fa52721170\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188869fa52721170 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:37.58886744 +0000 UTC m=+5.755169014,LastTimestamp:2026-01-07 09:49:47.589365572 +0000 UTC m=+15.755667176,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.761821 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Jan 07 09:49:51 crc kubenswrapper[5131]: &Event{ObjectMeta:{kube-apiserver-crc.188869fd8a90fd86 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:36764->192.168.126.11:17697: read: connection reset by peer
Jan 07 09:49:51 crc kubenswrapper[5131]: body:
Jan 07 09:49:51 crc kubenswrapper[5131]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:51.415319942 +0000 UTC m=+19.581621556,LastTimestamp:2026-01-07 09:49:51.415319942 +0000 UTC m=+19.581621556,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Jan 07 09:49:51 crc kubenswrapper[5131]: >
Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.765464 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188869fd8a922c3a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:36764->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:51.415397434 +0000 UTC m=+19.581699038,LastTimestamp:2026-01-07 09:49:51.415397434 +0000 UTC m=+19.581699038,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 07 09:49:51 crc kubenswrapper[5131]: I0107 09:49:51.866092 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 07 09:49:51 crc kubenswrapper[5131]: I0107 09:49:51.866503 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 07 09:49:51 crc kubenswrapper[5131]: I0107 09:49:51.866769 5131 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 07 09:49:51 crc kubenswrapper[5131]: I0107 09:49:51.866826 5131 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 07 09:49:51 crc kubenswrapper[5131]: I0107 09:49:51.867633 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:49:51 crc kubenswrapper[5131]: I0107 09:49:51.867661 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:49:51 crc kubenswrapper[5131]: I0107 09:49:51.867670 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.867995 5131 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 07 09:49:51 crc kubenswrapper[5131]: I0107 09:49:51.871997 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.874782 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Jan 07 09:49:51 crc kubenswrapper[5131]: &Event{ObjectMeta:{kube-apiserver-crc.188869fda579ff45 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused
Jan 07 09:49:51 crc kubenswrapper[5131]: body:
Jan 07 09:49:51 crc kubenswrapper[5131]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:51.866797893 +0000 UTC m=+20.033099457,LastTimestamp:2026-01-07 09:49:51.866797893 +0000 UTC m=+20.033099457,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Jan 07 09:49:51 crc kubenswrapper[5131]: >
Jan 07 09:49:51 crc kubenswrapper[5131]: E0107 09:49:51.879352 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188869fda57b0c9d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:51.866866845 +0000 UTC m=+20.033168419,LastTimestamp:2026-01-07 09:49:51.866866845 +0000 UTC m=+20.033168419,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 07 09:49:52 crc kubenswrapper[5131]: I0107 09:49:52.009975 5131 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 07 09:49:52 crc kubenswrapper[5131]: I0107 09:49:52.010087 5131 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 07 09:49:52 crc kubenswrapper[5131]: E0107 09:49:52.017207 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Jan 07 09:49:52 crc kubenswrapper[5131]: &Event{ObjectMeta:{kube-apiserver-crc.188869fdae039cb6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Liveness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused
Jan 07 09:49:52 crc kubenswrapper[5131]: body:
Jan 07 09:49:52 crc kubenswrapper[5131]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:52.010034358 +0000 UTC m=+20.176335962,LastTimestamp:2026-01-07 09:49:52.010034358 +0000 UTC m=+20.176335962,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Jan 07 09:49:52 crc kubenswrapper[5131]: >
Jan 07 09:49:52 crc kubenswrapper[5131]: E0107 09:49:52.022134 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188869fdae04ecee openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:52.01012043 +0000 UTC m=+20.176422034,LastTimestamp:2026-01-07 09:49:52.01012043 +0000 UTC m=+20.176422034,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 07 09:49:52 crc kubenswrapper[5131]: I0107 09:49:52.127317 5131 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 07 09:49:52 crc kubenswrapper[5131]: E0107 09:49:52.232105 5131 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 07 09:49:52 crc kubenswrapper[5131]: I0107 09:49:52.298292 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log"
Jan 07 09:49:52 crc kubenswrapper[5131]: I0107 09:49:52.300164 5131 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="1cf234b3b691b05282272dd6550724fea27e87f7add44f735186741bad0ff89c" exitCode=255
Jan 07 09:49:52 crc kubenswrapper[5131]: I0107 09:49:52.300238 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"1cf234b3b691b05282272dd6550724fea27e87f7add44f735186741bad0ff89c"}
Jan 07 09:49:52 crc kubenswrapper[5131]: I0107 09:49:52.300493 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 07 09:49:52 crc kubenswrapper[5131]: I0107 09:49:52.301106 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:49:52 crc kubenswrapper[5131]: I0107 09:49:52.301137 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:49:52 crc kubenswrapper[5131]: I0107 09:49:52.301147 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:49:52 crc kubenswrapper[5131]: E0107 09:49:52.301505 5131 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 07 09:49:52 crc kubenswrapper[5131]: I0107 09:49:52.301819 5131 scope.go:117] "RemoveContainer" containerID="1cf234b3b691b05282272dd6550724fea27e87f7add44f735186741bad0ff89c"
Jan 07 09:49:52 crc kubenswrapper[5131]: E0107 09:49:52.313571 5131 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188869f9be8d3427\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188869f9be8d3427 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:35.107617831 +0000 UTC m=+3.273919385,LastTimestamp:2026-01-07 09:49:52.303391068 +0000 UTC m=+20.469692642,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 07 09:49:52 crc kubenswrapper[5131]: E0107 09:49:52.599510 5131 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188869f9cd78e63a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188869f9cd78e63a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:35.357945402 +0000 UTC m=+3.524246966,LastTimestamp:2026-01-07 09:49:52.595517338 +0000 UTC m=+20.761818902,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 07 09:49:52 crc kubenswrapper[5131]: E0107 09:49:52.614298 5131 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188869f9ce343e4d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188869f9ce343e4d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:35.370223181 +0000 UTC m=+3.536524745,LastTimestamp:2026-01-07 09:49:52.604879398 +0000 UTC m=+20.771180962,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 07 09:49:53 crc kubenswrapper[5131]: I0107 09:49:53.084899 5131 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 07 09:49:53 crc kubenswrapper[5131]: I0107 09:49:53.305108 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log"
Jan 07 09:49:53 crc kubenswrapper[5131]: I0107 09:49:53.306892 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"0fdbd28a39894c4e64b425dec71e9df503f960370458c202fa36c3317d0c0714"}
Jan 07 09:49:53 crc kubenswrapper[5131]: I0107 09:49:53.307222 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 07 09:49:53 crc kubenswrapper[5131]: I0107 09:49:53.307933 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:49:53 crc kubenswrapper[5131]: I0107 09:49:53.308008 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:49:53 crc kubenswrapper[5131]: I0107 09:49:53.308026 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:49:53 crc kubenswrapper[5131]: E0107 09:49:53.308509 5131 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 07 09:49:54 crc kubenswrapper[5131]: I0107 09:49:54.080523 5131 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 07 09:49:54 crc kubenswrapper[5131]: I0107 09:49:54.312607 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Jan 07 09:49:54 crc kubenswrapper[5131]: I0107 09:49:54.313958 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log"
Jan 07 09:49:54 crc kubenswrapper[5131]: I0107 09:49:54.315931 5131 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="0fdbd28a39894c4e64b425dec71e9df503f960370458c202fa36c3317d0c0714" exitCode=255
Jan 07 09:49:54 crc kubenswrapper[5131]: I0107 09:49:54.316000 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"0fdbd28a39894c4e64b425dec71e9df503f960370458c202fa36c3317d0c0714"}
Jan 07 09:49:54 crc kubenswrapper[5131]: I0107 09:49:54.316041 5131 scope.go:117] "RemoveContainer" containerID="1cf234b3b691b05282272dd6550724fea27e87f7add44f735186741bad0ff89c"
Jan 07 09:49:54 crc kubenswrapper[5131]: I0107 09:49:54.316271 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 07 09:49:54 crc kubenswrapper[5131]: I0107 09:49:54.316969 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:49:54 crc kubenswrapper[5131]: I0107 09:49:54.317008 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:49:54 crc kubenswrapper[5131]: I0107 09:49:54.317020 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:49:54 crc kubenswrapper[5131]: E0107 09:49:54.317399 5131 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 07 09:49:54 crc kubenswrapper[5131]: I0107 09:49:54.317678 5131 scope.go:117] "RemoveContainer" containerID="0fdbd28a39894c4e64b425dec71e9df503f960370458c202fa36c3317d0c0714"
Jan 07 09:49:54 crc kubenswrapper[5131]: E0107 09:49:54.317959 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 07 09:49:54 crc kubenswrapper[5131]: E0107 09:49:54.324574 5131 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188869fe379340b4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:54.31792658 +0000 UTC m=+22.484228144,LastTimestamp:2026-01-07 09:49:54.31792658 +0000 UTC m=+22.484228144,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 07 09:49:54 crc kubenswrapper[5131]: I0107 09:49:54.597110 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 07 09:49:54 crc kubenswrapper[5131]: I0107 09:49:54.597321 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 07 09:49:54 crc kubenswrapper[5131]: I0107 09:49:54.598150 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:49:54 crc kubenswrapper[5131]: I0107 09:49:54.598188 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:49:54 crc kubenswrapper[5131]: I0107 09:49:54.598201 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:49:54 crc kubenswrapper[5131]: E0107 09:49:54.598550 5131 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 07 09:49:54 crc kubenswrapper[5131]: I0107 09:49:54.603973 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 07 09:49:54 crc kubenswrapper[5131]: E0107 09:49:54.720114 5131 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Jan 07 09:49:54 crc kubenswrapper[5131]: I0107 09:49:54.958158 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 07 09:49:54 crc kubenswrapper[5131]: I0107 09:49:54.959449 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:49:54 crc kubenswrapper[5131]: I0107 09:49:54.959501 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:49:54 crc kubenswrapper[5131]: I0107 09:49:54.959519 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:49:54 crc kubenswrapper[5131]: I0107 09:49:54.959553 5131 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 07 09:49:54 crc kubenswrapper[5131]: E0107 09:49:54.972961 5131 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Jan 07 09:49:55 crc kubenswrapper[5131]: I0107 09:49:55.083359 5131 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 07 09:49:55 crc kubenswrapper[5131]: I0107 09:49:55.320420 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Jan 07 09:49:55 crc kubenswrapper[5131]: I0107 09:49:55.322761 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 07 09:49:55 crc kubenswrapper[5131]: I0107 09:49:55.322761 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 07 09:49:55 crc kubenswrapper[5131]: I0107 09:49:55.323698 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:49:55 crc kubenswrapper[5131]: I0107 09:49:55.323756 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:49:55 crc kubenswrapper[5131]: I0107 09:49:55.323722 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:49:55 crc kubenswrapper[5131]: I0107 09:49:55.323782 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:49:55 crc kubenswrapper[5131]: I0107 09:49:55.323818 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:49:55 crc kubenswrapper[5131]: I0107 09:49:55.323888 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:49:55 crc kubenswrapper[5131]: E0107 09:49:55.324358 5131 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 07 09:49:55 crc kubenswrapper[5131]: E0107 09:49:55.324757 5131 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 07 09:49:55 crc kubenswrapper[5131]: I0107 09:49:55.325183 5131 scope.go:117] "RemoveContainer" containerID="0fdbd28a39894c4e64b425dec71e9df503f960370458c202fa36c3317d0c0714"
Jan 07 09:49:55 crc kubenswrapper[5131]: E0107 09:49:55.325591 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 07 09:49:55 crc kubenswrapper[5131]: E0107 09:49:55.333393 5131 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188869fe379340b4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188869fe379340b4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:54.31792658 +0000 UTC m=+22.484228144,LastTimestamp:2026-01-07 09:49:55.325525759 +0000 UTC m=+23.491827353,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 07 09:49:56 crc kubenswrapper[5131]: I0107 09:49:56.084600 5131 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 07 09:49:57 crc kubenswrapper[5131]: I0107 09:49:57.081660 5131 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 07 09:49:58 crc kubenswrapper[5131]: I0107 09:49:58.078762 5131 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 07 09:49:58 crc kubenswrapper[5131]: E0107 09:49:58.094550 5131 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 07 09:49:59 crc kubenswrapper[5131]: I0107 09:49:59.083217 5131 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 07 09:49:59 crc kubenswrapper[5131]: E0107 09:49:59.180280 5131 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 07 09:50:00 crc kubenswrapper[5131]: I0107 09:50:00.084114 5131 csi_plugin.go:988] Failed to
contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 07 09:50:00 crc kubenswrapper[5131]: E0107 09:50:00.090609 5131 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 07 09:50:01 crc kubenswrapper[5131]: E0107 09:50:01.061265 5131 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 07 09:50:01 crc kubenswrapper[5131]: I0107 09:50:01.081881 5131 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 07 09:50:01 crc kubenswrapper[5131]: E0107 09:50:01.727941 5131 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 07 09:50:01 crc kubenswrapper[5131]: I0107 09:50:01.973328 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 07 09:50:01 crc kubenswrapper[5131]: I0107 09:50:01.975091 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:01 crc kubenswrapper[5131]: I0107 09:50:01.975165 5131 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:01 crc kubenswrapper[5131]: I0107 09:50:01.975193 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:01 crc kubenswrapper[5131]: I0107 09:50:01.975247 5131 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 07 09:50:01 crc kubenswrapper[5131]: E0107 09:50:01.989613 5131 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 07 09:50:02 crc kubenswrapper[5131]: I0107 09:50:02.009989 5131 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 09:50:02 crc kubenswrapper[5131]: I0107 09:50:02.010347 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 07 09:50:02 crc kubenswrapper[5131]: I0107 09:50:02.011896 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:02 crc kubenswrapper[5131]: I0107 09:50:02.011944 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:02 crc kubenswrapper[5131]: I0107 09:50:02.011963 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:02 crc kubenswrapper[5131]: E0107 09:50:02.012546 5131 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 07 09:50:02 crc kubenswrapper[5131]: I0107 09:50:02.013013 5131 scope.go:117] "RemoveContainer" containerID="0fdbd28a39894c4e64b425dec71e9df503f960370458c202fa36c3317d0c0714" Jan 07 09:50:02 crc 
kubenswrapper[5131]: E0107 09:50:02.013327 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 07 09:50:02 crc kubenswrapper[5131]: E0107 09:50:02.015649 5131 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188869fe379340b4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188869fe379340b4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:54.31792658 +0000 UTC m=+22.484228144,LastTimestamp:2026-01-07 09:50:02.013277894 +0000 UTC m=+30.179579488,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:50:02 crc kubenswrapper[5131]: I0107 09:50:02.084106 5131 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 07 09:50:02 crc kubenswrapper[5131]: E0107 
09:50:02.233658 5131 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 07 09:50:03 crc kubenswrapper[5131]: I0107 09:50:03.083348 5131 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 07 09:50:03 crc kubenswrapper[5131]: I0107 09:50:03.308258 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 09:50:03 crc kubenswrapper[5131]: I0107 09:50:03.308635 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 07 09:50:03 crc kubenswrapper[5131]: I0107 09:50:03.310113 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:03 crc kubenswrapper[5131]: I0107 09:50:03.310208 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:03 crc kubenswrapper[5131]: I0107 09:50:03.310238 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:03 crc kubenswrapper[5131]: E0107 09:50:03.311093 5131 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 07 09:50:03 crc kubenswrapper[5131]: I0107 09:50:03.311642 5131 scope.go:117] "RemoveContainer" containerID="0fdbd28a39894c4e64b425dec71e9df503f960370458c202fa36c3317d0c0714" Jan 07 09:50:03 crc kubenswrapper[5131]: E0107 09:50:03.312172 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 07 09:50:03 crc kubenswrapper[5131]: E0107 09:50:03.321164 5131 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188869fe379340b4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188869fe379340b4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:54.31792658 +0000 UTC m=+22.484228144,LastTimestamp:2026-01-07 09:50:03.312095661 +0000 UTC m=+31.478397265,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:50:04 crc kubenswrapper[5131]: I0107 09:50:04.083191 5131 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 07 09:50:05 crc kubenswrapper[5131]: I0107 09:50:05.084529 5131 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 07 09:50:06 crc 
kubenswrapper[5131]: I0107 09:50:06.084381 5131 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 07 09:50:07 crc kubenswrapper[5131]: I0107 09:50:07.083582 5131 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 07 09:50:08 crc kubenswrapper[5131]: I0107 09:50:08.080705 5131 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 07 09:50:08 crc kubenswrapper[5131]: E0107 09:50:08.736175 5131 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 07 09:50:08 crc kubenswrapper[5131]: I0107 09:50:08.990415 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 07 09:50:08 crc kubenswrapper[5131]: I0107 09:50:08.992578 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:08 crc kubenswrapper[5131]: I0107 09:50:08.992648 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:08 crc kubenswrapper[5131]: I0107 09:50:08.992681 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:08 crc kubenswrapper[5131]: I0107 09:50:08.992730 5131 kubelet_node_status.go:78] 
"Attempting to register node" node="crc" Jan 07 09:50:09 crc kubenswrapper[5131]: E0107 09:50:09.009703 5131 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 07 09:50:09 crc kubenswrapper[5131]: I0107 09:50:09.085190 5131 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 07 09:50:10 crc kubenswrapper[5131]: I0107 09:50:10.085412 5131 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 07 09:50:11 crc kubenswrapper[5131]: I0107 09:50:11.084175 5131 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 07 09:50:12 crc kubenswrapper[5131]: I0107 09:50:12.084543 5131 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 07 09:50:12 crc kubenswrapper[5131]: E0107 09:50:12.234736 5131 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 07 09:50:13 crc kubenswrapper[5131]: I0107 09:50:13.084686 5131 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource 
"csinodes" in API group "storage.k8s.io" at the cluster scope Jan 07 09:50:13 crc kubenswrapper[5131]: E0107 09:50:13.879804 5131 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 07 09:50:14 crc kubenswrapper[5131]: I0107 09:50:14.082013 5131 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 07 09:50:15 crc kubenswrapper[5131]: I0107 09:50:15.083714 5131 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 07 09:50:15 crc kubenswrapper[5131]: E0107 09:50:15.745524 5131 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 07 09:50:16 crc kubenswrapper[5131]: I0107 09:50:16.010709 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 07 09:50:16 crc kubenswrapper[5131]: I0107 09:50:16.012224 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:16 crc kubenswrapper[5131]: I0107 09:50:16.012284 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:16 crc kubenswrapper[5131]: I0107 09:50:16.012306 5131 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:16 crc kubenswrapper[5131]: I0107 09:50:16.012347 5131 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 07 09:50:16 crc kubenswrapper[5131]: E0107 09:50:16.028129 5131 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 07 09:50:16 crc kubenswrapper[5131]: I0107 09:50:16.083741 5131 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 07 09:50:17 crc kubenswrapper[5131]: I0107 09:50:17.083342 5131 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 07 09:50:18 crc kubenswrapper[5131]: I0107 09:50:18.084744 5131 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 07 09:50:18 crc kubenswrapper[5131]: I0107 09:50:18.179640 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 07 09:50:18 crc kubenswrapper[5131]: I0107 09:50:18.180874 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:18 crc kubenswrapper[5131]: I0107 09:50:18.180935 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:18 crc 
kubenswrapper[5131]: I0107 09:50:18.180956 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:18 crc kubenswrapper[5131]: E0107 09:50:18.181495 5131 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 07 09:50:18 crc kubenswrapper[5131]: I0107 09:50:18.181987 5131 scope.go:117] "RemoveContainer" containerID="0fdbd28a39894c4e64b425dec71e9df503f960370458c202fa36c3317d0c0714" Jan 07 09:50:18 crc kubenswrapper[5131]: E0107 09:50:18.191917 5131 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188869f9be8d3427\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188869f9be8d3427 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:35.107617831 +0000 UTC m=+3.273919385,LastTimestamp:2026-01-07 09:50:18.18346509 +0000 UTC m=+46.349766684,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:50:18 crc kubenswrapper[5131]: E0107 09:50:18.479990 5131 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188869f9cd78e63a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" 
in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188869f9cd78e63a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:35.357945402 +0000 UTC m=+3.524246966,LastTimestamp:2026-01-07 09:50:18.471653863 +0000 UTC m=+46.637955437,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:50:18 crc kubenswrapper[5131]: E0107 09:50:18.493410 5131 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188869f9ce343e4d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188869f9ce343e4d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:35.370223181 +0000 UTC m=+3.536524745,LastTimestamp:2026-01-07 09:50:18.488768223 +0000 UTC m=+46.655069777,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:50:19 crc kubenswrapper[5131]: I0107 09:50:19.086998 5131 
csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 07 09:50:19 crc kubenswrapper[5131]: E0107 09:50:19.397367 5131 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 07 09:50:19 crc kubenswrapper[5131]: I0107 09:50:19.401448 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 07 09:50:19 crc kubenswrapper[5131]: I0107 09:50:19.402202 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 07 09:50:19 crc kubenswrapper[5131]: I0107 09:50:19.404794 5131 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="9ce5c5322e4dfa939241d2f3f807c9d150117431e391c5f986a200413b054a33" exitCode=255 Jan 07 09:50:19 crc kubenswrapper[5131]: I0107 09:50:19.404857 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"9ce5c5322e4dfa939241d2f3f807c9d150117431e391c5f986a200413b054a33"} Jan 07 09:50:19 crc kubenswrapper[5131]: I0107 09:50:19.404893 5131 scope.go:117] "RemoveContainer" containerID="0fdbd28a39894c4e64b425dec71e9df503f960370458c202fa36c3317d0c0714" Jan 07 09:50:19 crc kubenswrapper[5131]: I0107 09:50:19.405157 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 07 09:50:19 
crc kubenswrapper[5131]: I0107 09:50:19.405990 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:19 crc kubenswrapper[5131]: I0107 09:50:19.406180 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:19 crc kubenswrapper[5131]: I0107 09:50:19.406214 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:19 crc kubenswrapper[5131]: E0107 09:50:19.407012 5131 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 07 09:50:19 crc kubenswrapper[5131]: I0107 09:50:19.407784 5131 scope.go:117] "RemoveContainer" containerID="9ce5c5322e4dfa939241d2f3f807c9d150117431e391c5f986a200413b054a33" Jan 07 09:50:19 crc kubenswrapper[5131]: E0107 09:50:19.408325 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 07 09:50:19 crc kubenswrapper[5131]: E0107 09:50:19.423007 5131 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188869fe379340b4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188869fe379340b4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:54.31792658 +0000 UTC m=+22.484228144,LastTimestamp:2026-01-07 09:50:19.40825279 +0000 UTC m=+47.574554394,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:50:20 crc kubenswrapper[5131]: I0107 09:50:20.081489 5131 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 07 09:50:20 crc kubenswrapper[5131]: I0107 09:50:20.411066 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 07 09:50:20 crc kubenswrapper[5131]: E0107 09:50:20.841438 5131 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 07 09:50:21 crc kubenswrapper[5131]: I0107 09:50:21.084153 5131 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 07 09:50:22 crc kubenswrapper[5131]: I0107 
09:50:22.009413 5131 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 09:50:22 crc kubenswrapper[5131]: I0107 09:50:22.009746 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 07 09:50:22 crc kubenswrapper[5131]: I0107 09:50:22.011519 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:22 crc kubenswrapper[5131]: I0107 09:50:22.011590 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:22 crc kubenswrapper[5131]: I0107 09:50:22.011620 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:22 crc kubenswrapper[5131]: E0107 09:50:22.012349 5131 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 07 09:50:22 crc kubenswrapper[5131]: I0107 09:50:22.012811 5131 scope.go:117] "RemoveContainer" containerID="9ce5c5322e4dfa939241d2f3f807c9d150117431e391c5f986a200413b054a33" Jan 07 09:50:22 crc kubenswrapper[5131]: E0107 09:50:22.013254 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 07 09:50:22 crc kubenswrapper[5131]: E0107 09:50:22.021902 5131 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188869fe379340b4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.188869fe379340b4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:54.31792658 +0000 UTC m=+22.484228144,LastTimestamp:2026-01-07 09:50:22.013197692 +0000 UTC m=+50.179499286,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:50:22 crc kubenswrapper[5131]: I0107 09:50:22.083390 5131 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 07 09:50:22 crc kubenswrapper[5131]: E0107 09:50:22.235602 5131 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 07 09:50:22 crc kubenswrapper[5131]: I0107 09:50:22.310888 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 07 09:50:22 crc kubenswrapper[5131]: I0107 09:50:22.311138 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 07 09:50:22 crc kubenswrapper[5131]: I0107 09:50:22.311954 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:22 crc kubenswrapper[5131]: I0107 09:50:22.311997 5131 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:22 crc kubenswrapper[5131]: I0107 09:50:22.312012 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:22 crc kubenswrapper[5131]: E0107 09:50:22.312412 5131 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 07 09:50:22 crc kubenswrapper[5131]: E0107 09:50:22.755721 5131 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 07 09:50:23 crc kubenswrapper[5131]: I0107 09:50:23.028960 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 07 09:50:23 crc kubenswrapper[5131]: I0107 09:50:23.030526 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:23 crc kubenswrapper[5131]: I0107 09:50:23.030586 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:23 crc kubenswrapper[5131]: I0107 09:50:23.030612 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:23 crc kubenswrapper[5131]: I0107 09:50:23.030648 5131 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 07 09:50:23 crc kubenswrapper[5131]: E0107 09:50:23.047595 5131 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 07 09:50:23 crc kubenswrapper[5131]: I0107 09:50:23.081268 
5131 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 07 09:50:23 crc kubenswrapper[5131]: I0107 09:50:23.308368 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 09:50:23 crc kubenswrapper[5131]: I0107 09:50:23.308654 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 07 09:50:23 crc kubenswrapper[5131]: I0107 09:50:23.309767 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:23 crc kubenswrapper[5131]: I0107 09:50:23.309904 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:23 crc kubenswrapper[5131]: I0107 09:50:23.309926 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:23 crc kubenswrapper[5131]: E0107 09:50:23.310623 5131 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 07 09:50:23 crc kubenswrapper[5131]: I0107 09:50:23.311097 5131 scope.go:117] "RemoveContainer" containerID="9ce5c5322e4dfa939241d2f3f807c9d150117431e391c5f986a200413b054a33" Jan 07 09:50:23 crc kubenswrapper[5131]: E0107 09:50:23.311440 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 07 09:50:23 crc 
kubenswrapper[5131]: E0107 09:50:23.319612 5131 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188869fe379340b4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188869fe379340b4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:49:54.31792658 +0000 UTC m=+22.484228144,LastTimestamp:2026-01-07 09:50:23.311379734 +0000 UTC m=+51.477681328,Count:7,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 07 09:50:24 crc kubenswrapper[5131]: I0107 09:50:24.084184 5131 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 07 09:50:25 crc kubenswrapper[5131]: I0107 09:50:25.084130 5131 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 07 09:50:25 crc kubenswrapper[5131]: E0107 09:50:25.309162 5131 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list 
resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 07 09:50:26 crc kubenswrapper[5131]: I0107 09:50:26.085484 5131 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 07 09:50:27 crc kubenswrapper[5131]: I0107 09:50:27.083129 5131 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 07 09:50:28 crc kubenswrapper[5131]: I0107 09:50:28.091760 5131 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 07 09:50:29 crc kubenswrapper[5131]: I0107 09:50:29.084980 5131 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 07 09:50:29 crc kubenswrapper[5131]: E0107 09:50:29.764910 5131 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 07 09:50:30 crc kubenswrapper[5131]: I0107 09:50:30.048175 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 07 09:50:30 crc kubenswrapper[5131]: I0107 09:50:30.049820 5131 kubelet_node_status.go:736] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:30 crc kubenswrapper[5131]: I0107 09:50:30.050111 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:30 crc kubenswrapper[5131]: I0107 09:50:30.050310 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:30 crc kubenswrapper[5131]: I0107 09:50:30.050570 5131 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 07 09:50:30 crc kubenswrapper[5131]: E0107 09:50:30.065235 5131 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 07 09:50:30 crc kubenswrapper[5131]: I0107 09:50:30.083433 5131 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 07 09:50:31 crc kubenswrapper[5131]: I0107 09:50:31.084481 5131 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 07 09:50:32 crc kubenswrapper[5131]: I0107 09:50:32.083465 5131 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 07 09:50:32 crc kubenswrapper[5131]: E0107 09:50:32.236765 5131 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 07 09:50:33 crc kubenswrapper[5131]: I0107 
09:50:33.082247 5131 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 07 09:50:34 crc kubenswrapper[5131]: I0107 09:50:34.084921 5131 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 07 09:50:35 crc kubenswrapper[5131]: I0107 09:50:35.084074 5131 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 07 09:50:36 crc kubenswrapper[5131]: I0107 09:50:36.082639 5131 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 07 09:50:36 crc kubenswrapper[5131]: E0107 09:50:36.773904 5131 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 07 09:50:37 crc kubenswrapper[5131]: I0107 09:50:37.066022 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 07 09:50:37 crc kubenswrapper[5131]: I0107 09:50:37.066608 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:37 crc kubenswrapper[5131]: I0107 09:50:37.066633 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 07 09:50:37 crc kubenswrapper[5131]: I0107 09:50:37.066643 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:37 crc kubenswrapper[5131]: I0107 09:50:37.066662 5131 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 07 09:50:37 crc kubenswrapper[5131]: E0107 09:50:37.082571 5131 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 07 09:50:37 crc kubenswrapper[5131]: I0107 09:50:37.082746 5131 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 07 09:50:37 crc kubenswrapper[5131]: I0107 09:50:37.314010 5131 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-4bsfn" Jan 07 09:50:37 crc kubenswrapper[5131]: I0107 09:50:37.321218 5131 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-4bsfn" Jan 07 09:50:37 crc kubenswrapper[5131]: I0107 09:50:37.376958 5131 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 07 09:50:37 crc kubenswrapper[5131]: I0107 09:50:37.974931 5131 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 07 09:50:38 crc kubenswrapper[5131]: I0107 09:50:38.179264 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 07 09:50:38 crc kubenswrapper[5131]: I0107 09:50:38.179953 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 07 09:50:38 crc kubenswrapper[5131]: I0107 09:50:38.179989 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:38 crc kubenswrapper[5131]: I0107 09:50:38.180002 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:38 crc kubenswrapper[5131]: E0107 09:50:38.180418 5131 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 07 09:50:38 crc kubenswrapper[5131]: I0107 09:50:38.180688 5131 scope.go:117] "RemoveContainer" containerID="9ce5c5322e4dfa939241d2f3f807c9d150117431e391c5f986a200413b054a33" Jan 07 09:50:38 crc kubenswrapper[5131]: E0107 09:50:38.180915 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 07 09:50:38 crc kubenswrapper[5131]: I0107 09:50:38.323087 5131 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2026-02-06 09:45:37 +0000 UTC" deadline="2026-02-01 09:18:15.117256667 +0000 UTC" Jan 07 09:50:38 crc kubenswrapper[5131]: I0107 09:50:38.323164 5131 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="599h27m36.794103605s" Jan 07 09:50:42 crc kubenswrapper[5131]: E0107 09:50:42.237286 5131 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 07 09:50:44 crc kubenswrapper[5131]: I0107 09:50:44.082665 5131 
kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 07 09:50:44 crc kubenswrapper[5131]: I0107 09:50:44.083682 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:44 crc kubenswrapper[5131]: I0107 09:50:44.083810 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:44 crc kubenswrapper[5131]: I0107 09:50:44.083924 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:44 crc kubenswrapper[5131]: I0107 09:50:44.084100 5131 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 07 09:50:44 crc kubenswrapper[5131]: I0107 09:50:44.094992 5131 kubelet_node_status.go:127] "Node was previously registered" node="crc" Jan 07 09:50:44 crc kubenswrapper[5131]: I0107 09:50:44.095308 5131 kubelet_node_status.go:81] "Successfully registered node" node="crc" Jan 07 09:50:44 crc kubenswrapper[5131]: E0107 09:50:44.095399 5131 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Jan 07 09:50:44 crc kubenswrapper[5131]: I0107 09:50:44.098903 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:44 crc kubenswrapper[5131]: I0107 09:50:44.099002 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:44 crc kubenswrapper[5131]: I0107 09:50:44.099070 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:44 crc kubenswrapper[5131]: I0107 09:50:44.099139 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:44 crc kubenswrapper[5131]: I0107 09:50:44.099205 5131 setters.go:618] "Node became 
not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:44Z","lastTransitionTime":"2026-01-07T09:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 09:50:44 crc kubenswrapper[5131]: E0107 09:50:44.117813 5131 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400452Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861252Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:50:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:50:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:50:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:50:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8
108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\
\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd75f290-f432-4d83-b44b-78dd53c6e94f\\\",\\\"systemUUID\\\":\\\"8ea6fa36-73d5-4d37-aab0-72c44945d452\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:44 crc kubenswrapper[5131]: I0107 09:50:44.126677 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:44 crc kubenswrapper[5131]: I0107 09:50:44.126770 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:44 crc kubenswrapper[5131]: I0107 09:50:44.126828 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:44 crc kubenswrapper[5131]: I0107 09:50:44.126916 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:44 crc kubenswrapper[5131]: I0107 09:50:44.126974 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:44Z","lastTransitionTime":"2026-01-07T09:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:44 crc kubenswrapper[5131]: E0107 09:50:44.139369 5131 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400452Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861252Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:50:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:50:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:50:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:50:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd75f290-f432-4d83-b44b-78dd53c6e94f\\\",\\\"systemUUID\\\":\\\"8ea6fa36-73d5-4d37-aab0-72c44945d452\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:44 crc kubenswrapper[5131]: I0107 09:50:44.147729 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:44 crc kubenswrapper[5131]: I0107 09:50:44.147849 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:44 crc kubenswrapper[5131]: I0107 09:50:44.147915 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:44 crc kubenswrapper[5131]: I0107 09:50:44.147975 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:44 crc kubenswrapper[5131]: I0107 09:50:44.148029 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:44Z","lastTransitionTime":"2026-01-07T09:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:44 crc kubenswrapper[5131]: E0107 09:50:44.161458 5131 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400452Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861252Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:50:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:50:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:50:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:50:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd75f290-f432-4d83-b44b-78dd53c6e94f\\\",\\\"systemUUID\\\":\\\"8ea6fa36-73d5-4d37-aab0-72c44945d452\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:44 crc kubenswrapper[5131]: I0107 09:50:44.169182 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:44 crc kubenswrapper[5131]: I0107 09:50:44.169230 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:44 crc kubenswrapper[5131]: I0107 09:50:44.169248 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:44 crc kubenswrapper[5131]: I0107 09:50:44.169267 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:44 crc kubenswrapper[5131]: I0107 09:50:44.169284 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:44Z","lastTransitionTime":"2026-01-07T09:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:44 crc kubenswrapper[5131]: E0107 09:50:44.184945 5131 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400452Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861252Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:50:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:50:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:50:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:50:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd75f290-f432-4d83-b44b-78dd53c6e94f\\\",\\\"systemUUID\\\":\\\"8ea6fa36-73d5-4d37-aab0-72c44945d452\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 07 09:50:44 crc kubenswrapper[5131]: E0107 09:50:44.185171 5131 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count"
Jan 07 09:50:44 crc kubenswrapper[5131]: E0107 09:50:44.185207 5131 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 07 09:50:44 crc kubenswrapper[5131]: E0107 09:50:44.285947 5131 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 07 09:50:44 crc kubenswrapper[5131]: E0107 09:50:44.386177 5131 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 07 09:50:44 crc kubenswrapper[5131]: E0107 09:50:44.486934 5131 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 07 09:50:44 crc kubenswrapper[5131]: E0107 09:50:44.587298 5131 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 07 09:50:44 crc kubenswrapper[5131]: E0107 09:50:44.688073 5131 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 07 09:50:44 crc kubenswrapper[5131]: E0107 09:50:44.789103 5131 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 07 09:50:44 crc kubenswrapper[5131]: E0107 09:50:44.889224 5131 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 07 09:50:44 crc kubenswrapper[5131]: E0107 09:50:44.989812 5131 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 07 09:50:45 crc kubenswrapper[5131]: E0107 09:50:45.090956 5131 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 07 09:50:45 crc kubenswrapper[5131]: E0107 09:50:45.191810 5131 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 07 09:50:45 crc kubenswrapper[5131]: E0107 09:50:45.292869 5131 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 07 09:50:45 crc kubenswrapper[5131]: E0107 09:50:45.393200 5131 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 07 09:50:45 crc kubenswrapper[5131]: E0107 09:50:45.493576 5131 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 07 09:50:45 crc kubenswrapper[5131]: E0107 09:50:45.594326 5131 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 07 09:50:45 crc kubenswrapper[5131]: E0107 09:50:45.695405 5131 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 07 09:50:45 crc kubenswrapper[5131]: E0107 09:50:45.796455 5131 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 07 09:50:45 crc kubenswrapper[5131]: E0107 09:50:45.896745 5131 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 07 09:50:45 crc kubenswrapper[5131]: E0107 09:50:45.997621 5131 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 07 09:50:46 crc kubenswrapper[5131]: E0107 09:50:46.098536 5131 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 07 09:50:46 crc kubenswrapper[5131]: E0107 09:50:46.199272 5131 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 07 09:50:46 crc kubenswrapper[5131]: E0107 09:50:46.300454 5131 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 07 09:50:46 crc kubenswrapper[5131]: E0107 09:50:46.401710 5131 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 07 09:50:46 crc kubenswrapper[5131]: E0107 09:50:46.501826 5131 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 07 09:50:46 crc kubenswrapper[5131]: E0107 09:50:46.602197 5131 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 07 09:50:46 crc kubenswrapper[5131]: E0107 09:50:46.702830 5131 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 07 09:50:46 crc kubenswrapper[5131]: E0107 09:50:46.803427 5131 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 07 09:50:46 crc kubenswrapper[5131]: E0107 09:50:46.903750 5131 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 07 09:50:47 crc kubenswrapper[5131]: E0107 09:50:47.003999 5131 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 07 09:50:47 crc kubenswrapper[5131]: E0107 09:50:47.104647 5131 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 07 09:50:47 crc kubenswrapper[5131]: E0107 09:50:47.204942 5131 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 07 09:50:47 crc kubenswrapper[5131]: E0107 09:50:47.305380 5131 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 07 09:50:47 crc kubenswrapper[5131]: E0107 09:50:47.406387 5131 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 07 09:50:47 crc kubenswrapper[5131]: E0107 09:50:47.506667 5131 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 07 09:50:47 crc kubenswrapper[5131]: E0107 09:50:47.607491 5131 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 07 09:50:47 crc kubenswrapper[5131]: E0107 09:50:47.708065 5131 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 07 09:50:47 crc kubenswrapper[5131]: E0107 09:50:47.808473 5131 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 07 09:50:47 crc kubenswrapper[5131]: E0107 09:50:47.908889 5131 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 07 09:50:48 crc kubenswrapper[5131]: E0107 09:50:48.009574 5131 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 07 09:50:48 crc kubenswrapper[5131]: E0107 09:50:48.110013 5131 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 07 09:50:48 crc kubenswrapper[5131]: I0107 09:50:48.180192 5131 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 07 09:50:48 crc kubenswrapper[5131]: I0107 09:50:48.181243 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:50:48 crc kubenswrapper[5131]: I0107 09:50:48.181287 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:50:48 crc kubenswrapper[5131]: I0107 09:50:48.181306 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:50:48 crc kubenswrapper[5131]: E0107 09:50:48.181819 5131 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 07 09:50:48 crc kubenswrapper[5131]: E0107 09:50:48.210681 5131 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 07 09:50:48 crc kubenswrapper[5131]: E0107 09:50:48.311885 5131 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 07 09:50:48 crc kubenswrapper[5131]: E0107 09:50:48.412396 5131 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 07 09:50:48 crc kubenswrapper[5131]: E0107 09:50:48.512813 5131 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 07 09:50:48 crc kubenswrapper[5131]: E0107 09:50:48.613973 5131 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 07 09:50:48 crc kubenswrapper[5131]: E0107 09:50:48.715232 5131 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 07 09:50:48 crc kubenswrapper[5131]: E0107 09:50:48.815698 5131 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 07 09:50:48 crc kubenswrapper[5131]: E0107 09:50:48.916625 5131 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 07 09:50:49 crc kubenswrapper[5131]: E0107 09:50:49.016869 5131 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 07 09:50:49 crc kubenswrapper[5131]: E0107 09:50:49.117457 5131 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 07 09:50:49 crc kubenswrapper[5131]: E0107 09:50:49.218561 5131 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 07 09:50:49 crc kubenswrapper[5131]: E0107 09:50:49.319193 5131 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 07 09:50:49 crc kubenswrapper[5131]: E0107 09:50:49.420243 5131 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 07 09:50:49 crc kubenswrapper[5131]: E0107 09:50:49.521033 5131 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 07 09:50:49 crc kubenswrapper[5131]: E0107 09:50:49.622138 5131 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 07 09:50:49 crc kubenswrapper[5131]: E0107 09:50:49.722251 5131 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 07 09:50:49 crc kubenswrapper[5131]: E0107 09:50:49.823170 5131 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 07 09:50:49 crc kubenswrapper[5131]: E0107 09:50:49.924103 5131 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 07 09:50:50 crc kubenswrapper[5131]: E0107 09:50:50.025091 5131 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 07 09:50:50 crc kubenswrapper[5131]: E0107 09:50:50.125300 5131 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.126314 5131 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160"
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.180426 5131 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.193914 5131 scope.go:117] "RemoveContainer" 
containerID="9ce5c5322e4dfa939241d2f3f807c9d150117431e391c5f986a200413b054a33"
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.197923 5131 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.228120 5131 apiserver.go:52] "Watching apiserver"
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.229149 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.229203 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.229221 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.229246 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.229264 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:50Z","lastTransitionTime":"2026-01-07T09:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.236074 5131 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66"
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.241456 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-machine-config-operator/machine-config-daemon-dvdrn","openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6","openshift-network-node-identity/network-node-identity-dgvkt","openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv","openshift-multus/network-metrics-daemon-5cj94","openshift-network-diagnostics/network-check-target-fhkjl","openshift-network-operator/iptables-alerter-5jnd7","openshift-image-registry/node-ca-mrsjt","openshift-multus/multus-additional-cni-plugins-gbjvz","openshift-multus/multus-wcqw9","openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5","openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-n4kr4","openshift-dns/node-resolver-mb6rx","openshift-ovn-kubernetes/ovnkube-node-kpj7m"]
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.243313 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.246689 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 07 09:50:50 crc kubenswrapper[5131]: E0107 09:50:50.246938 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.247594 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\""
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.247697 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\""
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.248132 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\""
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.248146 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 07 09:50:50 crc kubenswrapper[5131]: E0107 09:50:50.248437 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.250245 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7"
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.251766 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt"
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.251897 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\""
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.252170 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 07 09:50:50 crc kubenswrapper[5131]: E0107 09:50:50.252325 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.254250 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\""
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.254344 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\""
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.254440 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\""
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.255075 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\""
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.267677 5131 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.269779 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.280416 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.281535 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cj94"
Jan 07 09:50:50 crc kubenswrapper[5131]: E0107 09:50:50.281672 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cj94" podUID="ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e"
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.286166 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-mrsjt"
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.288123 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\""
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.288875 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\""
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.289194 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\""
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.289276 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\""
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.293572 5131 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.294493 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.295224 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-wcqw9" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.295813 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-gbjvz" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.298503 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.298934 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.299118 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.299378 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.300079 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.300652 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.301081 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.301089 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.301121 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.302717 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.302715 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.303145 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.303426 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.304023 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.307160 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-n4kr4" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.307319 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-mb6rx" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.310024 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.310372 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.311180 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.311308 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.311319 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.311511 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.311553 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.311889 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.312129 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.312745 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.312782 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.313339 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.315060 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.315682 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.315783 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.327176 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.333364 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.333421 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.333437 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.333457 5131 kubelet_node_status.go:736] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.333472 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:50Z","lastTransitionTime":"2026-01-07T09:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.338816 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.347000 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.359593 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a8b62c-1e16-4bf4-8a1a-7e21eea28a36\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://9ab1390be253b0acce2b38b656b6ab5fb3b2b0b0df6b0bf4aa1c9a6706d5b377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://222177f33dbcd646941928b01ab9b05233038233497ca1767fba6f7706b3dc93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f94695038c6c0633c279f363909c7c60ac6e6487469757ddfa9a64766e9ad38a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc47
5aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ce5c5322e4dfa939241d2f3f807c9d150117431e391c5f986a200413b054a33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce5c5322e4dfa939241d2f3f807c9d150117431e391c5f986a200413b054a33\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-07T09:50:19Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW0107 09:50:18.874623 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0107 09:50:18.874740 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0107 09:50:18.875448 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1194057718/tls.crt::/tmp/serving-cert-1194057718/tls.key\\\\\\\"\\\\nI0107 09:50:19.352672 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0107 09:50:19.355791 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0107 09:50:19.355824 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0107 09:50:19.355916 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0107 09:50:19.355934 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0107 09:50:19.362427 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0107 09:50:19.362473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0107 09:50:19.362471 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0107 09:50:19.362482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 09:50:19.362512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0107 09:50:19.362519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0107 09:50:19.362527 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0107 09:50:19.362533 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0107 09:50:19.364774 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T09:50:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f7200e5d1d13d232ad67de2ea89381542d858c266de6b68e33bfe97a520bfd66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\
":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://dd6371190af55f4fae494e73d316e7347f26bd60e0b02bc18c31ce1cf7f1bb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd6371190af55f4fae494e73d316e7347f26bd60e0b02bc18c31ce1cf7f1bb9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:49:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.369743 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.375497 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.375537 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5b188180-f777-4a12-845b-d19fd5853d85-cni-binary-copy\") pod \"multus-additional-cni-plugins-gbjvz\" (UID: \"5b188180-f777-4a12-845b-d19fd5853d85\") " pod="openshift-multus/multus-additional-cni-plugins-gbjvz" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.375560 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwlcw\" (UniqueName: \"kubernetes.io/projected/5b188180-f777-4a12-845b-d19fd5853d85-kube-api-access-xwlcw\") pod 
\"multus-additional-cni-plugins-gbjvz\" (UID: \"5b188180-f777-4a12-845b-d19fd5853d85\") " pod="openshift-multus/multus-additional-cni-plugins-gbjvz" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.375583 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.375600 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.375618 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/5b188180-f777-4a12-845b-d19fd5853d85-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-gbjvz\" (UID: \"5b188180-f777-4a12-845b-d19fd5853d85\") " pod="openshift-multus/multus-additional-cni-plugins-gbjvz" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.375668 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.375713 5131 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1-multus-cni-dir\") pod \"multus-wcqw9\" (UID: \"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1\") " pod="openshift-multus/multus-wcqw9" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.375741 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.375762 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3942e752-44ba-4678-8723-6cd778e60d73-mcd-auth-proxy-config\") pod \"machine-config-daemon-dvdrn\" (UID: \"3942e752-44ba-4678-8723-6cd778e60d73\") " pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.375826 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1-etc-kubernetes\") pod \"multus-wcqw9\" (UID: \"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1\") " pod="openshift-multus/multus-wcqw9" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.375884 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5b188180-f777-4a12-845b-d19fd5853d85-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-gbjvz\" (UID: \"5b188180-f777-4a12-845b-d19fd5853d85\") " pod="openshift-multus/multus-additional-cni-plugins-gbjvz" Jan 07 09:50:50 crc 
kubenswrapper[5131]: I0107 09:50:50.375913 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.375934 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g97xn\" (UniqueName: \"kubernetes.io/projected/3942e752-44ba-4678-8723-6cd778e60d73-kube-api-access-g97xn\") pod \"machine-config-daemon-dvdrn\" (UID: \"3942e752-44ba-4678-8723-6cd778e60d73\") " pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.375956 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1-host-var-lib-kubelet\") pod \"multus-wcqw9\" (UID: \"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1\") " pod="openshift-multus/multus-wcqw9" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.375978 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/b094e1e2-9ae5-4cf3-9cef-71c25224af2a-serviceca\") pod \"node-ca-mrsjt\" (UID: \"b094e1e2-9ae5-4cf3-9cef-71c25224af2a\") " pod="openshift-image-registry/node-ca-mrsjt" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.376000 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1-host-var-lib-cni-multus\") pod \"multus-wcqw9\" (UID: \"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1\") " 
pod="openshift-multus/multus-wcqw9" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.376021 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1-host-run-multus-certs\") pod \"multus-wcqw9\" (UID: \"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1\") " pod="openshift-multus/multus-wcqw9" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.376041 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.376065 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5b188180-f777-4a12-845b-d19fd5853d85-system-cni-dir\") pod \"multus-additional-cni-plugins-gbjvz\" (UID: \"5b188180-f777-4a12-845b-d19fd5853d85\") " pod="openshift-multus/multus-additional-cni-plugins-gbjvz" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.376086 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5b188180-f777-4a12-845b-d19fd5853d85-cnibin\") pod \"multus-additional-cni-plugins-gbjvz\" (UID: \"5b188180-f777-4a12-845b-d19fd5853d85\") " pod="openshift-multus/multus-additional-cni-plugins-gbjvz" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.376265 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1-hostroot\") pod \"multus-wcqw9\" (UID: 
\"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1\") " pod="openshift-multus/multus-wcqw9" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.376283 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1-multus-conf-dir\") pod \"multus-wcqw9\" (UID: \"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1\") " pod="openshift-multus/multus-wcqw9" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.376302 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1-os-release\") pod \"multus-wcqw9\" (UID: \"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1\") " pod="openshift-multus/multus-wcqw9" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.376323 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1-cni-binary-copy\") pod \"multus-wcqw9\" (UID: \"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1\") " pod="openshift-multus/multus-wcqw9" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.376345 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.376366 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3942e752-44ba-4678-8723-6cd778e60d73-proxy-tls\") pod \"machine-config-daemon-dvdrn\" (UID: \"3942e752-44ba-4678-8723-6cd778e60d73\") " 
pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.376389 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.376411 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.376432 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1-host-run-netns\") pod \"multus-wcqw9\" (UID: \"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1\") " pod="openshift-multus/multus-wcqw9" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.376451 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf4gw\" (UniqueName: \"kubernetes.io/projected/a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1-kube-api-access-pf4gw\") pod \"multus-wcqw9\" (UID: \"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1\") " pod="openshift-multus/multus-wcqw9" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.376507 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod 
\"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.376534 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b094e1e2-9ae5-4cf3-9cef-71c25224af2a-host\") pod \"node-ca-mrsjt\" (UID: \"b094e1e2-9ae5-4cf3-9cef-71c25224af2a\") " pod="openshift-image-registry/node-ca-mrsjt" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.376574 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1-cnibin\") pod \"multus-wcqw9\" (UID: \"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1\") " pod="openshift-multus/multus-wcqw9" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.376598 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e-metrics-certs\") pod \"network-metrics-daemon-5cj94\" (UID: \"ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e\") " pod="openshift-multus/network-metrics-daemon-5cj94" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.376641 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.376626 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1-system-cni-dir\") 
pod \"multus-wcqw9\" (UID: \"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1\") " pod="openshift-multus/multus-wcqw9" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.376736 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.376762 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.377156 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 07 09:50:50 crc kubenswrapper[5131]: E0107 09:50:50.377306 5131 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 07 09:50:50 crc kubenswrapper[5131]: E0107 09:50:50.377414 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-07 09:50:50.877350116 +0000 UTC m=+79.043651670 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.377992 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/3942e752-44ba-4678-8723-6cd778e60d73-rootfs\") pod \"machine-config-daemon-dvdrn\" (UID: \"3942e752-44ba-4678-8723-6cd778e60d73\") " pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.378037 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1-multus-socket-dir-parent\") pod \"multus-wcqw9\" (UID: \"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1\") " pod="openshift-multus/multus-wcqw9" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.378062 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1-host-run-k8s-cni-cncf-io\") pod \"multus-wcqw9\" (UID: \"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1\") " pod="openshift-multus/multus-wcqw9" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.378113 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5b188180-f777-4a12-845b-d19fd5853d85-tuning-conf-dir\") pod \"multus-additional-cni-plugins-gbjvz\" (UID: \"5b188180-f777-4a12-845b-d19fd5853d85\") " pod="openshift-multus/multus-additional-cni-plugins-gbjvz" Jan 07 09:50:50 crc 
kubenswrapper[5131]: E0107 09:50:50.378176 5131 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.378203 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1-multus-daemon-config\") pod \"multus-wcqw9\" (UID: \"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1\") " pod="openshift-multus/multus-wcqw9" Jan 07 09:50:50 crc kubenswrapper[5131]: E0107 09:50:50.378359 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-07 09:50:50.878326761 +0000 UTC m=+79.044628335 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.378501 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgbqt\" (UniqueName: \"kubernetes.io/projected/b094e1e2-9ae5-4cf3-9cef-71c25224af2a-kube-api-access-qgbqt\") pod \"node-ca-mrsjt\" (UID: \"b094e1e2-9ae5-4cf3-9cef-71c25224af2a\") " pod="openshift-image-registry/node-ca-mrsjt" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.378538 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5b188180-f777-4a12-845b-d19fd5853d85-os-release\") pod \"multus-additional-cni-plugins-gbjvz\" (UID: \"5b188180-f777-4a12-845b-d19fd5853d85\") " pod="openshift-multus/multus-additional-cni-plugins-gbjvz" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.378620 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdv7z\" (UniqueName: \"kubernetes.io/projected/ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e-kube-api-access-tdv7z\") pod \"network-metrics-daemon-5cj94\" (UID: \"ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e\") " pod="openshift-multus/network-metrics-daemon-5cj94" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.378827 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1-host-var-lib-cni-bin\") pod \"multus-wcqw9\" (UID: \"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1\") " 
pod="openshift-multus/multus-wcqw9" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.378888 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.378877 5131 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.379110 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5cj94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdv7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdv7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5cj94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.382308 5131 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.393341 5131 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 07 09:50:50 crc kubenswrapper[5131]: E0107 09:50:50.393561 5131 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 07 09:50:50 crc kubenswrapper[5131]: E0107 09:50:50.393579 5131 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 07 09:50:50 crc kubenswrapper[5131]: E0107 09:50:50.393589 5131 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.393781 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3942e752-44ba-4678-8723-6cd778e60d73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g97xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g97xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dvdrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:50 crc kubenswrapper[5131]: E0107 09:50:50.394000 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-07 09:50:50.893983035 +0000 UTC m=+79.060284599 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.394618 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.395680 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.395820 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 07 09:50:50 crc kubenswrapper[5131]: E0107 09:50:50.396121 5131 projected.go:289] Couldn't get configMap 
openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 07 09:50:50 crc kubenswrapper[5131]: E0107 09:50:50.396284 5131 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 07 09:50:50 crc kubenswrapper[5131]: E0107 09:50:50.396372 5131 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 07 09:50:50 crc kubenswrapper[5131]: E0107 09:50:50.397159 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-07 09:50:50.897144154 +0000 UTC m=+79.063445728 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.397532 5131 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.400015 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.402661 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-n4kr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad935b69-bef7-46a2-a03a-367404c13329\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9czf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9czf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-n4kr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.403532 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.405069 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.409597 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-mb6rx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e402924-308a-4d47-8bf8-24a147d5f8bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zr9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mb6rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.422027 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae48c9e0-ebbd-4c8e-9c54-f6b3ac967d34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://3383385e15ea13116da82fca0263faac293829a1d334c3ab9c3e887d3df064f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://81a19faef229379a9f11c9404f00a4cd033fe495e075b60878147f896005767f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a048a29a003bbae3bffb916e657c9b18246309ec82bcd1cf410f76e266ba25cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8b7f67a6eae4396f64fdd42279b61c6411a1dd1ad3f4d92b483b4cf59ff1284c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:49:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.430033 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could 
not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.435208 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.435233 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.435243 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.435257 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.435267 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:50Z","lastTransitionTime":"2026-01-07T09:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.438082 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.447687 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbjvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b188180-f777-4a12-845b-d19fd5853d85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbjvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.455893 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.466272 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.476710 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.479199 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.479242 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.479268 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.479290 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.479312 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.479333 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.479357 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.479377 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.479397 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.479417 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.479439 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.479459 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.479481 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.479503 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.479526 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.479547 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.479567 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.479588 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.479611 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.479635 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.479655 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.479676 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.479697 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.480056 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" (OuterVolumeSpecName: "kube-api-access-9vsz9") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "kube-api-access-9vsz9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.480274 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.480366 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" (OuterVolumeSpecName: "config") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.480643 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" (OuterVolumeSpecName: "kube-api-access-m26jq") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "kube-api-access-m26jq". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.480980 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.481177 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.481225 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.481317 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" (OuterVolumeSpecName: "kube-api-access-8pskd") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "kube-api-access-8pskd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.481521 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" (OuterVolumeSpecName: "signing-key") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.481790 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.482020 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" (OuterVolumeSpecName: "kube-api-access-wj4qr") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "kube-api-access-wj4qr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.482354 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" (OuterVolumeSpecName: "kube-api-access-4g8ts") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "kube-api-access-4g8ts". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.482425 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.482495 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.482504 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.482614 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.482735 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.482770 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.482800 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.482830 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.482882 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.482913 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.482928 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.482944 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.482979 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.483009 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.483039 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.483071 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.483103 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.483125 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" (OuterVolumeSpecName: "serviceca") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.483137 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.483172 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.483205 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.483223 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" (OuterVolumeSpecName: "config") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.483237 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.483271 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.483303 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.483337 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.483368 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.483399 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.483431 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.483460 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.483465 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.483518 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.483541 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.483607 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") pod \"e093be35-bb62-4843-b2e8-094545761610\" (UID: \"e093be35-bb62-4843-b2e8-094545761610\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.483628 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.483645 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.483648 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.483661 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.483682 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.483721 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.483769 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.483790 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.483808 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.483826 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.483915 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.483921 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.483932 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.484022 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.484014 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.484506 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.484771 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" (OuterVolumeSpecName: "config") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.484789 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" (OuterVolumeSpecName: "images") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.484972 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.485045 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.485067 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.485095 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.485139 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" (OuterVolumeSpecName: "tmp") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.485140 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.485175 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.485194 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.485309 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" (OuterVolumeSpecName: "kube-api-access-pllx6") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "kube-api-access-pllx6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.485333 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.485353 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.485443 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" (OuterVolumeSpecName: "audit") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.485574 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.485570 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" (OuterVolumeSpecName: "kube-api-access-ddlk9") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "kube-api-access-ddlk9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.485414 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.485757 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.485776 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.485798 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 07 09:50:50 crc 
kubenswrapper[5131]: I0107 09:50:50.485816 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.485843 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.485862 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.485877 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.485896 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.485907 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" 
(OuterVolumeSpecName: "kube-api-access-ptkcf") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "kube-api-access-ptkcf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.485918 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.485941 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.485960 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.485975 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.485993 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") 
" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.486009 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.486025 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.486044 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.486059 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.486075 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.486091 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.486108 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.486126 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.486142 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") pod \"0effdbcf-dd7d-404d-9d48-77536d665a5d\" (UID: \"0effdbcf-dd7d-404d-9d48-77536d665a5d\") " Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.486159 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.486176 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") 
" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.486194 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.486213 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.486228 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.486247 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.486263 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.486279 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") pod 
\"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.486295 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.486312 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.486328 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.486346 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.486361 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.486378 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.486397 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.486414 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.486430 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.486446 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.486464 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: 
\"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.486480 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.486496 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.486515 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.486538 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.486557 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.486577 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.486596 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.486618 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.486919 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mrsjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b094e1e2-9ae5-4cf3-9cef-71c25224af2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgbqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mrsjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.485956 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" (OuterVolumeSpecName: "kube-api-access-twvbl") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). 
InnerVolumeSpecName "kube-api-access-twvbl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.486001 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.486116 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" (OuterVolumeSpecName: "kube-api-access-pddnv") pod "e093be35-bb62-4843-b2e8-094545761610" (UID: "e093be35-bb62-4843-b2e8-094545761610"). InnerVolumeSpecName "kube-api-access-pddnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.486186 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.486507 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.486643 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" (OuterVolumeSpecName: "service-ca") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.486858 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.486884 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" (OuterVolumeSpecName: "utilities") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.487103 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.487201 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.487251 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.487439 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" (OuterVolumeSpecName: "kube-api-access-6g4lr") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "kube-api-access-6g4lr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.487467 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.487813 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.487899 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" (OuterVolumeSpecName: "kube-api-access-w94wk") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "kube-api-access-w94wk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.487998 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.488063 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" (OuterVolumeSpecName: "tmp") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.488088 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.488558 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.488608 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" (OuterVolumeSpecName: "tmp") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.488726 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.489151 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" (OuterVolumeSpecName: "utilities") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.489312 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" (OuterVolumeSpecName: "kube-api-access-5lcfw") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "kube-api-access-5lcfw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.489555 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.490099 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.490372 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" (OuterVolumeSpecName: "service-ca") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.490689 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" (OuterVolumeSpecName: "utilities") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.490739 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.490735 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.491201 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" (OuterVolumeSpecName: "kube-api-access-mfzkj") pod "0effdbcf-dd7d-404d-9d48-77536d665a5d" (UID: "0effdbcf-dd7d-404d-9d48-77536d665a5d"). InnerVolumeSpecName "kube-api-access-mfzkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.491312 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" (OuterVolumeSpecName: "utilities") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: E0107 09:50:50.491438 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:50:50.991415566 +0000 UTC m=+79.157717130 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.492283 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.492433 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" (OuterVolumeSpecName: "kube-api-access-zsb9b") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "kube-api-access-zsb9b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.492741 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.492921 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" (OuterVolumeSpecName: "images") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.493094 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" (OuterVolumeSpecName: "config") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.493109 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.493427 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.493502 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.493583 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.493857 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.493927 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" (OuterVolumeSpecName: "kube-api-access-7jjkz") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "kube-api-access-7jjkz". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.494271 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.494385 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.494770 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" (OuterVolumeSpecName: "kube-api-access-pgx6b") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "kube-api-access-pgx6b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.495158 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.495316 5131 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-etcd/etcd-crc" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.495406 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" (OuterVolumeSpecName: "kube-api-access-l9stx") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "kube-api-access-l9stx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.495506 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.495622 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.495672 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" (OuterVolumeSpecName: "kube-api-access-xxfcv") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "kube-api-access-xxfcv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.495737 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" (OuterVolumeSpecName: "console-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.496027 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.496069 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.496075 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.496010 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.496198 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" (OuterVolumeSpecName: "tmp") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.496232 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.496292 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.496324 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" (OuterVolumeSpecName: "kube-api-access-sbc2l") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "kube-api-access-sbc2l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.496375 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.496658 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.496701 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.496727 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Jan 07 
09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.496984 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.497056 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.497093 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.497126 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.497161 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.497195 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.497232 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.497270 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.497302 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.497334 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.497378 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.497411 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.497444 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.497476 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.497509 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.497542 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.497593 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.497631 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.497664 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.497700 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.497735 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.497770 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.497804 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.497875 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.497918 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.497990 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.498027 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.498063 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.498098 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.498135 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.498172 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.498206 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.498239 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.498275 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.498318 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.498400 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.498457 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.498454 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-wcqw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf4gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wcqw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.498496 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.498720 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.498749 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.498779 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.498806 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.498873 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.498905 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.498929 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.498958 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.498986 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.499009 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.499035 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.499062 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.499088 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.499116 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.499138 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.499162 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.499185 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.499207 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.499248 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.499273 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.499296 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.499318 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.499341 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.499364 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.499387 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.499409 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.499430 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.499451 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.499472 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.499494 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.499517 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.499541 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") pod \"af41de71-79cf-4590-bbe9-9e8b848862cb\" (UID: \"af41de71-79cf-4590-bbe9-9e8b848862cb\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.499564 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.499587 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.499612 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.499636 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.499664 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.499688 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.499713 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.499737 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.499763 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.499789 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.499813 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.499859 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.499881 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.499906 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.499935 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.499963 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.499992 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.500018 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.500047 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.500074 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.500099 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.500171 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.500198 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.500230 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.500256 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.500282 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.500307 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.500331 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.500356 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.500380 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") "
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.496331 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" (OuterVolumeSpecName: "config") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "config".
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.497092 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" (OuterVolumeSpecName: "kube-api-access-ws8zz") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "kube-api-access-ws8zz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.498171 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" (OuterVolumeSpecName: "kube-api-access-d4tqq") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "kube-api-access-d4tqq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.498483 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" (OuterVolumeSpecName: "kube-api-access-xfp5s") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "kube-api-access-xfp5s". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.498550 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.499171 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" (OuterVolumeSpecName: "kube-api-access-zg8nc") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "kube-api-access-zg8nc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.501287 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.499725 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" (OuterVolumeSpecName: "kube-api-access-grwfz") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "kube-api-access-grwfz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.499943 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.500259 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" (OuterVolumeSpecName: "utilities") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.500399 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.500529 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" (OuterVolumeSpecName: "kube-api-access-d7cps") pod "af41de71-79cf-4590-bbe9-9e8b848862cb" (UID: "af41de71-79cf-4590-bbe9-9e8b848862cb"). InnerVolumeSpecName "kube-api-access-d7cps". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.500692 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.500720 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.501383 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.500747 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" (OuterVolumeSpecName: "kube-api-access-mjwtd") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "kube-api-access-mjwtd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.501008 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.501144 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" (OuterVolumeSpecName: "tmp") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.501313 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.501685 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" (OuterVolumeSpecName: "kube-api-access-zth6t") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "kube-api-access-zth6t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.502158 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.502246 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.502460 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" (OuterVolumeSpecName: "kube-api-access-ks6v2") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "kube-api-access-ks6v2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.502777 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.503134 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1-os-release\") pod \"multus-wcqw9\" (UID: \"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1\") " pod="openshift-multus/multus-wcqw9" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.503157 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.503206 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" (OuterVolumeSpecName: "whereabouts-flatfile-configmap") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "whereabouts-flatfile-configmap". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.503715 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" (OuterVolumeSpecName: "kube-api-access-dztfv") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "kube-api-access-dztfv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.504092 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.504466 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1-cni-binary-copy\") pod \"multus-wcqw9\" (UID: \"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1\") " pod="openshift-multus/multus-wcqw9" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.505217 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.505218 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" (OuterVolumeSpecName: "config") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.505338 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.505378 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.504397 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.505659 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.506038 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" (OuterVolumeSpecName: "kube-api-access-8nspp") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "kube-api-access-8nspp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.506208 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" (OuterVolumeSpecName: "kube-api-access-xnxbn") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "kube-api-access-xnxbn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.506364 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.508589 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.503172 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1-cni-binary-copy\") pod \"multus-wcqw9\" (UID: \"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1\") " pod="openshift-multus/multus-wcqw9" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.508727 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3942e752-44ba-4678-8723-6cd778e60d73-proxy-tls\") pod \"machine-config-daemon-dvdrn\" (UID: \"3942e752-44ba-4678-8723-6cd778e60d73\") " pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.508799 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1-host-run-netns\") pod \"multus-wcqw9\" (UID: \"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1\") " pod="openshift-multus/multus-wcqw9" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.508870 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pf4gw\" (UniqueName: \"kubernetes.io/projected/a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1-kube-api-access-pf4gw\") pod \"multus-wcqw9\" (UID: \"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1\") " pod="openshift-multus/multus-wcqw9" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.508935 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b094e1e2-9ae5-4cf3-9cef-71c25224af2a-host\") pod \"node-ca-mrsjt\" (UID: \"b094e1e2-9ae5-4cf3-9cef-71c25224af2a\") " pod="openshift-image-registry/node-ca-mrsjt" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.508971 5131 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1-cnibin\") pod \"multus-wcqw9\" (UID: \"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1\") " pod="openshift-multus/multus-wcqw9" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.509010 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-host-kubelet\") pod \"ovnkube-node-kpj7m\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.509044 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ad935b69-bef7-46a2-a03a-367404c13329-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-n4kr4\" (UID: \"ad935b69-bef7-46a2-a03a-367404c13329\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-n4kr4" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.509091 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e-metrics-certs\") pod \"network-metrics-daemon-5cj94\" (UID: \"ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e\") " pod="openshift-multus/network-metrics-daemon-5cj94" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.509126 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-host-run-ovn-kubernetes\") pod \"ovnkube-node-kpj7m\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:50:50 crc 
kubenswrapper[5131]: I0107 09:50:50.509180 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1-system-cni-dir\") pod \"multus-wcqw9\" (UID: \"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1\") " pod="openshift-multus/multus-wcqw9" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.509220 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-run-systemd\") pod \"ovnkube-node-kpj7m\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.509262 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/3942e752-44ba-4678-8723-6cd778e60d73-rootfs\") pod \"machine-config-daemon-dvdrn\" (UID: \"3942e752-44ba-4678-8723-6cd778e60d73\") " pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.509296 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-etc-openvswitch\") pod \"ovnkube-node-kpj7m\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.509329 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-node-log\") pod \"ovnkube-node-kpj7m\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.509426 5131 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/592342ad-cf5f-4290-aa15-e99a6454cbf5-env-overrides\") pod \"ovnkube-node-kpj7m\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.509470 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1-multus-socket-dir-parent\") pod \"multus-wcqw9\" (UID: \"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1\") " pod="openshift-multus/multus-wcqw9" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.509510 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1-host-run-k8s-cni-cncf-io\") pod \"multus-wcqw9\" (UID: \"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1\") " pod="openshift-multus/multus-wcqw9" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.509545 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1e402924-308a-4d47-8bf8-24a147d5f8bf-hosts-file\") pod \"node-resolver-mb6rx\" (UID: \"1e402924-308a-4d47-8bf8-24a147d5f8bf\") " pod="openshift-dns/node-resolver-mb6rx" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.509578 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/592342ad-cf5f-4290-aa15-e99a6454cbf5-ovn-node-metrics-cert\") pod \"ovnkube-node-kpj7m\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.509621 5131 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5b188180-f777-4a12-845b-d19fd5853d85-tuning-conf-dir\") pod \"multus-additional-cni-plugins-gbjvz\" (UID: \"5b188180-f777-4a12-845b-d19fd5853d85\") " pod="openshift-multus/multus-additional-cni-plugins-gbjvz" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.509657 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1-multus-daemon-config\") pod \"multus-wcqw9\" (UID: \"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1\") " pod="openshift-multus/multus-wcqw9" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.509692 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-host-run-netns\") pod \"ovnkube-node-kpj7m\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.509728 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/592342ad-cf5f-4290-aa15-e99a6454cbf5-ovnkube-config\") pod \"ovnkube-node-kpj7m\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.509760 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ad935b69-bef7-46a2-a03a-367404c13329-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-n4kr4\" (UID: \"ad935b69-bef7-46a2-a03a-367404c13329\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-n4kr4" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 
09:50:50.509796 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9czf\" (UniqueName: \"kubernetes.io/projected/ad935b69-bef7-46a2-a03a-367404c13329-kube-api-access-r9czf\") pod \"ovnkube-control-plane-57b78d8988-n4kr4\" (UID: \"ad935b69-bef7-46a2-a03a-367404c13329\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-n4kr4" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.509931 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qgbqt\" (UniqueName: \"kubernetes.io/projected/b094e1e2-9ae5-4cf3-9cef-71c25224af2a-kube-api-access-qgbqt\") pod \"node-ca-mrsjt\" (UID: \"b094e1e2-9ae5-4cf3-9cef-71c25224af2a\") " pod="openshift-image-registry/node-ca-mrsjt" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.509974 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5b188180-f777-4a12-845b-d19fd5853d85-os-release\") pod \"multus-additional-cni-plugins-gbjvz\" (UID: \"5b188180-f777-4a12-845b-d19fd5853d85\") " pod="openshift-multus/multus-additional-cni-plugins-gbjvz" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.510014 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tdv7z\" (UniqueName: \"kubernetes.io/projected/ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e-kube-api-access-tdv7z\") pod \"network-metrics-daemon-5cj94\" (UID: \"ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e\") " pod="openshift-multus/network-metrics-daemon-5cj94" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.510051 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1-host-var-lib-cni-bin\") pod \"multus-wcqw9\" (UID: \"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1\") " pod="openshift-multus/multus-wcqw9" Jan 07 
09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.510087 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zr9m5\" (UniqueName: \"kubernetes.io/projected/1e402924-308a-4d47-8bf8-24a147d5f8bf-kube-api-access-zr9m5\") pod \"node-resolver-mb6rx\" (UID: \"1e402924-308a-4d47-8bf8-24a147d5f8bf\") " pod="openshift-dns/node-resolver-mb6rx" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.510128 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-host-slash\") pod \"ovnkube-node-kpj7m\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.510167 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-host-cni-netd\") pod \"ovnkube-node-kpj7m\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.510202 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-kpj7m\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.510239 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/592342ad-cf5f-4290-aa15-e99a6454cbf5-ovnkube-script-lib\") pod \"ovnkube-node-kpj7m\" (UID: 
\"592342ad-cf5f-4290-aa15-e99a6454cbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.510286 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5b188180-f777-4a12-845b-d19fd5853d85-cni-binary-copy\") pod \"multus-additional-cni-plugins-gbjvz\" (UID: \"5b188180-f777-4a12-845b-d19fd5853d85\") " pod="openshift-multus/multus-additional-cni-plugins-gbjvz" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.510325 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xwlcw\" (UniqueName: \"kubernetes.io/projected/5b188180-f777-4a12-845b-d19fd5853d85-kube-api-access-xwlcw\") pod \"multus-additional-cni-plugins-gbjvz\" (UID: \"5b188180-f777-4a12-845b-d19fd5853d85\") " pod="openshift-multus/multus-additional-cni-plugins-gbjvz" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.510392 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/1e402924-308a-4d47-8bf8-24a147d5f8bf-tmp-dir\") pod \"node-resolver-mb6rx\" (UID: \"1e402924-308a-4d47-8bf8-24a147d5f8bf\") " pod="openshift-dns/node-resolver-mb6rx" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.510436 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-var-lib-openvswitch\") pod \"ovnkube-node-kpj7m\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.510470 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78wtj\" (UniqueName: 
\"kubernetes.io/projected/592342ad-cf5f-4290-aa15-e99a6454cbf5-kube-api-access-78wtj\") pod \"ovnkube-node-kpj7m\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.510532 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/5b188180-f777-4a12-845b-d19fd5853d85-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-gbjvz\" (UID: \"5b188180-f777-4a12-845b-d19fd5853d85\") " pod="openshift-multus/multus-additional-cni-plugins-gbjvz" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.510595 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1-multus-cni-dir\") pod \"multus-wcqw9\" (UID: \"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1\") " pod="openshift-multus/multus-wcqw9" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.510668 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-run-openvswitch\") pod \"ovnkube-node-kpj7m\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.510720 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.510761 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/3942e752-44ba-4678-8723-6cd778e60d73-mcd-auth-proxy-config\") pod \"machine-config-daemon-dvdrn\" (UID: \"3942e752-44ba-4678-8723-6cd778e60d73\") " pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.510799 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1-etc-kubernetes\") pod \"multus-wcqw9\" (UID: \"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1\") " pod="openshift-multus/multus-wcqw9" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.510863 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-systemd-units\") pod \"ovnkube-node-kpj7m\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.510916 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-host-cni-bin\") pod \"ovnkube-node-kpj7m\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.510966 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5b188180-f777-4a12-845b-d19fd5853d85-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-gbjvz\" (UID: \"5b188180-f777-4a12-845b-d19fd5853d85\") " pod="openshift-multus/multus-additional-cni-plugins-gbjvz" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.511002 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"log-socket\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-log-socket\") pod \"ovnkube-node-kpj7m\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.511071 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g97xn\" (UniqueName: \"kubernetes.io/projected/3942e752-44ba-4678-8723-6cd778e60d73-kube-api-access-g97xn\") pod \"machine-config-daemon-dvdrn\" (UID: \"3942e752-44ba-4678-8723-6cd778e60d73\") " pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.511111 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1-host-var-lib-kubelet\") pod \"multus-wcqw9\" (UID: \"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1\") " pod="openshift-multus/multus-wcqw9" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.511144 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-run-ovn\") pod \"ovnkube-node-kpj7m\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.511199 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ad935b69-bef7-46a2-a03a-367404c13329-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-n4kr4\" (UID: \"ad935b69-bef7-46a2-a03a-367404c13329\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-n4kr4" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.511243 5131 reconciler_common.go:224] "operationExecutor.MountVolume started 
for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/b094e1e2-9ae5-4cf3-9cef-71c25224af2a-serviceca\") pod \"node-ca-mrsjt\" (UID: \"b094e1e2-9ae5-4cf3-9cef-71c25224af2a\") " pod="openshift-image-registry/node-ca-mrsjt" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.511280 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1-host-var-lib-cni-multus\") pod \"multus-wcqw9\" (UID: \"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1\") " pod="openshift-multus/multus-wcqw9" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.511324 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1-host-run-multus-certs\") pod \"multus-wcqw9\" (UID: \"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1\") " pod="openshift-multus/multus-wcqw9" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.511362 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.511407 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5b188180-f777-4a12-845b-d19fd5853d85-system-cni-dir\") pod \"multus-additional-cni-plugins-gbjvz\" (UID: \"5b188180-f777-4a12-845b-d19fd5853d85\") " pod="openshift-multus/multus-additional-cni-plugins-gbjvz" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.511445 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/5b188180-f777-4a12-845b-d19fd5853d85-cnibin\") pod \"multus-additional-cni-plugins-gbjvz\" (UID: \"5b188180-f777-4a12-845b-d19fd5853d85\") " pod="openshift-multus/multus-additional-cni-plugins-gbjvz" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.511506 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1-hostroot\") pod \"multus-wcqw9\" (UID: \"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1\") " pod="openshift-multus/multus-wcqw9" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.511543 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1-multus-conf-dir\") pod \"multus-wcqw9\" (UID: \"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1\") " pod="openshift-multus/multus-wcqw9" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.511658 5131 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.511688 5131 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.511710 5131 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.511733 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 
crc kubenswrapper[5131]: I0107 09:50:50.511754 5131 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.511773 5131 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.511793 5131 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.511815 5131 reconciler_common.go:299] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.511870 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.511898 5131 reconciler_common.go:299] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.511908 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" (OuterVolumeSpecName: "kube-api-access-rzt4w") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). 
InnerVolumeSpecName "kube-api-access-rzt4w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.511919 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.511955 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.511972 5131 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.511983 5131 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.511996 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512010 5131 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512024 5131 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") on node \"crc\" DevicePath \"\"" 
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512038 5131 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512051 5131 reconciler_common.go:299] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512061 5131 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512072 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512082 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512092 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512103 5131 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512113 5131 
reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512124 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512136 5131 reconciler_common.go:299] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512147 5131 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512157 5131 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512166 5131 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512176 5131 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512186 5131 reconciler_common.go:299] "Volume 
detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512196 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512206 5131 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512216 5131 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512226 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512235 5131 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512246 5131 reconciler_common.go:299] "Volume detached for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512283 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dztfv\" 
(UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512294 5131 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512304 5131 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512314 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512403 5131 reconciler_common.go:299] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512415 5131 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512424 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512434 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") on node \"crc\" DevicePath \"\"" 
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512443 5131 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512453 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512462 5131 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512471 5131 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512480 5131 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512491 5131 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512499 5131 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512509 5131 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512518 5131 reconciler_common.go:299] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512529 5131 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512538 5131 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512547 5131 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512556 5131 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512565 5131 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512574 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") on node \"crc\" DevicePath 
\"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512584 5131 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512593 5131 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512603 5131 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512613 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512623 5131 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512645 5131 reconciler_common.go:299] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512658 5131 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512671 5131 reconciler_common.go:299] "Volume detached for volume 
\"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512684 5131 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512694 5131 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512703 5131 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512713 5131 reconciler_common.go:299] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512727 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512785 5131 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512797 5131 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512806 5131 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512816 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512897 5131 reconciler_common.go:299] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512909 5131 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512919 5131 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512908 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.512928 5131 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.513029 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.513056 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.513110 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.513125 5131 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.513138 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.513142 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" 
(OuterVolumeSpecName: "proxy-tls") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.511979 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1-multus-conf-dir\") pod \"multus-wcqw9\" (UID: \"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1\") " pod="openshift-multus/multus-wcqw9" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.513150 5131 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.513184 5131 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.513198 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.513209 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.513222 5131 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.513233 5131 
reconciler_common.go:299] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.513244 5131 reconciler_common.go:299] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.513254 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.513264 5131 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.513274 5131 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.513283 5131 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.513294 5131 reconciler_common.go:299] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.513303 5131 reconciler_common.go:299] "Volume detached for volume \"audit\" (UniqueName: 
\"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.513314 5131 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.513326 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.513336 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.513345 5131 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.513355 5131 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.513365 5131 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.513375 5131 reconciler_common.go:299] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc 
kubenswrapper[5131]: I0107 09:50:50.513428 5131 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.513440 5131 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.513452 5131 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.513461 5131 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.513523 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1-host-run-netns\") pod \"multus-wcqw9\" (UID: \"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1\") " pod="openshift-multus/multus-wcqw9" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.513600 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.513615 5131 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 
09:50:50.513625 5131 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.513636 5131 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.513647 5131 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.513682 5131 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.513694 5131 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.514303 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.514320 5131 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.514369 5131 reconciler_common.go:299] "Volume detached for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.514380 5131 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.514379 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1-system-cni-dir\") pod \"multus-wcqw9\" (UID: \"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1\") " pod="openshift-multus/multus-wcqw9" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.514390 5131 reconciler_common.go:299] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.514408 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.514432 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/3942e752-44ba-4678-8723-6cd778e60d73-rootfs\") pod \"machine-config-daemon-dvdrn\" (UID: \"3942e752-44ba-4678-8723-6cd778e60d73\") " pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.514502 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1-multus-socket-dir-parent\") pod \"multus-wcqw9\" 
(UID: \"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1\") " pod="openshift-multus/multus-wcqw9" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.514536 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1-host-run-k8s-cni-cncf-io\") pod \"multus-wcqw9\" (UID: \"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1\") " pod="openshift-multus/multus-wcqw9" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.514661 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5b188180-f777-4a12-845b-d19fd5853d85-tuning-conf-dir\") pod \"multus-additional-cni-plugins-gbjvz\" (UID: \"5b188180-f777-4a12-845b-d19fd5853d85\") " pod="openshift-multus/multus-additional-cni-plugins-gbjvz" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.515433 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1-multus-daemon-config\") pod \"multus-wcqw9\" (UID: \"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1\") " pod="openshift-multus/multus-wcqw9" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.513993 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b094e1e2-9ae5-4cf3-9cef-71c25224af2a-host\") pod \"node-ca-mrsjt\" (UID: \"b094e1e2-9ae5-4cf3-9cef-71c25224af2a\") " pod="openshift-image-registry/node-ca-mrsjt" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.514057 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1-cnibin\") pod \"multus-wcqw9\" (UID: \"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1\") " pod="openshift-multus/multus-wcqw9" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.514057 5131 
operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" (OuterVolumeSpecName: "kube-api-access-q4smf") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "kube-api-access-q4smf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: E0107 09:50:50.514139 5131 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.515852 5131 reconciler_common.go:299] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.513915 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" (OuterVolumeSpecName: "kube-api-access-6rmnv") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "kube-api-access-6rmnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.514533 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" (OuterVolumeSpecName: "utilities") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.514684 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.515878 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" (OuterVolumeSpecName: "config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.515916 5131 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.515935 5131 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.515950 5131 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.515965 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") on node \"crc\" 
DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: E0107 09:50:50.515997 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e-metrics-certs podName:ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e nodeName:}" failed. No retries permitted until 2026-01-07 09:50:51.015980634 +0000 UTC m=+79.182282278 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e-metrics-certs") pod "network-metrics-daemon-5cj94" (UID: "ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.516502 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.516620 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" (OuterVolumeSpecName: "kube-api-access-6dmhf") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "kube-api-access-6dmhf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.516843 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.516969 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.517108 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.516805 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"592342ad-cf5f-4290-aa15-e99a6454cbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kpj7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.518037 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.518058 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" (OuterVolumeSpecName: "kube-api-access-qqbfk") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "kube-api-access-qqbfk". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.518521 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5b188180-f777-4a12-845b-d19fd5853d85-cni-binary-copy\") pod \"multus-additional-cni-plugins-gbjvz\" (UID: \"5b188180-f777-4a12-845b-d19fd5853d85\") " pod="openshift-multus/multus-additional-cni-plugins-gbjvz" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.518661 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.519067 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5b188180-f777-4a12-845b-d19fd5853d85-os-release\") pod \"multus-additional-cni-plugins-gbjvz\" (UID: \"5b188180-f777-4a12-845b-d19fd5853d85\") " pod="openshift-multus/multus-additional-cni-plugins-gbjvz" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.519396 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1-host-var-lib-cni-bin\") pod \"multus-wcqw9\" (UID: \"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1\") " pod="openshift-multus/multus-wcqw9" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.519424 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). 
InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.519696 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.519791 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.519961 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/5b188180-f777-4a12-845b-d19fd5853d85-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-gbjvz\" (UID: \"5b188180-f777-4a12-845b-d19fd5853d85\") " pod="openshift-multus/multus-additional-cni-plugins-gbjvz" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.520124 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.520136 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" (OuterVolumeSpecName: "kube-api-access-ftwb6") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "kube-api-access-ftwb6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.520253 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1-host-var-lib-kubelet\") pod \"multus-wcqw9\" (UID: \"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1\") " pod="openshift-multus/multus-wcqw9" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.520336 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1-multus-cni-dir\") pod \"multus-wcqw9\" (UID: \"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1\") " pod="openshift-multus/multus-wcqw9" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.520396 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.520558 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " 
pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.521024 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3942e752-44ba-4678-8723-6cd778e60d73-mcd-auth-proxy-config\") pod \"machine-config-daemon-dvdrn\" (UID: \"3942e752-44ba-4678-8723-6cd778e60d73\") " pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.521075 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1-etc-kubernetes\") pod \"multus-wcqw9\" (UID: \"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1\") " pod="openshift-multus/multus-wcqw9" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.521567 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5b188180-f777-4a12-845b-d19fd5853d85-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-gbjvz\" (UID: \"5b188180-f777-4a12-845b-d19fd5853d85\") " pod="openshift-multus/multus-additional-cni-plugins-gbjvz" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.521669 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1-host-var-lib-cni-multus\") pod \"multus-wcqw9\" (UID: \"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1\") " pod="openshift-multus/multus-wcqw9" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.521696 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1-host-run-multus-certs\") pod \"multus-wcqw9\" (UID: \"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1\") " pod="openshift-multus/multus-wcqw9" 
Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.521749 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5b188180-f777-4a12-845b-d19fd5853d85-system-cni-dir\") pod \"multus-additional-cni-plugins-gbjvz\" (UID: \"5b188180-f777-4a12-845b-d19fd5853d85\") " pod="openshift-multus/multus-additional-cni-plugins-gbjvz" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.521952 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5b188180-f777-4a12-845b-d19fd5853d85-cnibin\") pod \"multus-additional-cni-plugins-gbjvz\" (UID: \"5b188180-f777-4a12-845b-d19fd5853d85\") " pod="openshift-multus/multus-additional-cni-plugins-gbjvz" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.521629 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" (OuterVolumeSpecName: "kube-api-access-9z4sw") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "kube-api-access-9z4sw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.515134 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.515545 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.515604 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.522128 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3942e752-44ba-4678-8723-6cd778e60d73-proxy-tls\") pod \"machine-config-daemon-dvdrn\" (UID: \"3942e752-44ba-4678-8723-6cd778e60d73\") " pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.522384 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1-hostroot\") pod \"multus-wcqw9\" (UID: \"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1\") " pod="openshift-multus/multus-wcqw9" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.522823 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" (OuterVolumeSpecName: "kube-api-access-tkdh6") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: 
"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "kube-api-access-tkdh6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.523150 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" (OuterVolumeSpecName: "kube-api-access-l87hs") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "kube-api-access-l87hs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.523266 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.523429 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.523569 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.523700 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" (OuterVolumeSpecName: "config") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.523756 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.524231 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" (OuterVolumeSpecName: "kube-api-access-4hb7m") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "kube-api-access-4hb7m". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.524391 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/b094e1e2-9ae5-4cf3-9cef-71c25224af2a-serviceca\") pod \"node-ca-mrsjt\" (UID: \"b094e1e2-9ae5-4cf3-9cef-71c25224af2a\") " pod="openshift-image-registry/node-ca-mrsjt" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.524715 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" (OuterVolumeSpecName: "kube-api-access-hckvg") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "kube-api-access-hckvg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.524768 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" (OuterVolumeSpecName: "certs") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.525532 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.525866 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.526013 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1-os-release\") pod \"multus-wcqw9\" (UID: \"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1\") " pod="openshift-multus/multus-wcqw9" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.526051 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" (OuterVolumeSpecName: "kube-api-access-z5rsr") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "kube-api-access-z5rsr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.526541 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.526614 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" (OuterVolumeSpecName: "utilities") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.526939 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" (OuterVolumeSpecName: "kube-api-access-94l9h") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "kube-api-access-94l9h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.527603 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.527976 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" (OuterVolumeSpecName: "config-volume") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.528107 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" (OuterVolumeSpecName: "config") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.528186 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" (OuterVolumeSpecName: "kube-api-access-26xrl") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "kube-api-access-26xrl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.528136 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.528443 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" (OuterVolumeSpecName: "config") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.529796 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"81989c7ac801be354f6f1e78382dbefc67b72ef6a85367ea48e04fc6ff4f128e"} Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.530796 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.531452 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.533006 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.533051 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pf4gw\" (UniqueName: \"kubernetes.io/projected/a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1-kube-api-access-pf4gw\") pod \"multus-wcqw9\" (UID: \"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1\") " pod="openshift-multus/multus-wcqw9" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.535598 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.536372 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.536397 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.536405 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.536418 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.536427 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:50Z","lastTransitionTime":"2026-01-07T09:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.537172 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.537297 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.537541 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" (OuterVolumeSpecName: "client-ca") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.537612 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgbqt\" (UniqueName: \"kubernetes.io/projected/b094e1e2-9ae5-4cf3-9cef-71c25224af2a-kube-api-access-qgbqt\") pod \"node-ca-mrsjt\" (UID: \"b094e1e2-9ae5-4cf3-9cef-71c25224af2a\") " pod="openshift-image-registry/node-ca-mrsjt" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.537951 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.538414 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" (OuterVolumeSpecName: "config") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.540793 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2562cbe-7a5f-44ee-ab23-4c3c8713b3c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c8a53522c69210792aee2dce5b7e8e34b2cf22c24393e063a59b465373ab5096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":
0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1e073b8c65df9f45d38018d244c88e515556561a3c3feb4a2cf3d270c77064b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ffbff8be21e181dfc3205fb877325fee8beefff7ba32e422a2619e1ab9772a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests
\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1043d76beefe7dc0844f533476401d9ca57619ede4a2fa4b59df7c24ef674024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1043d76beefe7dc0844f533476401d9ca57619ede4a2fa4b59df7c24ef674024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:49:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.541589 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" (OuterVolumeSpecName: "kube-api-access-hm9x7") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "kube-api-access-hm9x7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.541779 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" (OuterVolumeSpecName: "ca-trust-extracted-pem") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "ca-trust-extracted-pem". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.541861 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdv7z\" (UniqueName: \"kubernetes.io/projected/ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e-kube-api-access-tdv7z\") pod \"network-metrics-daemon-5cj94\" (UID: \"ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e\") " pod="openshift-multus/network-metrics-daemon-5cj94" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.542253 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" (OuterVolumeSpecName: "config") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.542253 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" (OuterVolumeSpecName: "tmp") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.542235 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.542389 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.542572 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.542945 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" (OuterVolumeSpecName: "cert") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.543061 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" (OuterVolumeSpecName: "kube-api-access-8nb9c") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "kube-api-access-8nb9c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.543314 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" (OuterVolumeSpecName: "kube-api-access-wbmqg") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "kube-api-access-wbmqg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.543448 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.543567 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.543611 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" (OuterVolumeSpecName: "kube-api-access-qgrkj") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "kube-api-access-qgrkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.543660 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" (OuterVolumeSpecName: "config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.543736 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" (OuterVolumeSpecName: "config") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.544032 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" (OuterVolumeSpecName: "kube-api-access-m5lgh") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "kube-api-access-m5lgh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.544038 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" (OuterVolumeSpecName: "kube-api-access-tknt7") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "kube-api-access-tknt7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.544356 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.544520 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.544620 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.544783 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.544812 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" (OuterVolumeSpecName: "kube-api-access-99zj9") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "kube-api-access-99zj9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.545083 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.545959 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" (OuterVolumeSpecName: "kube-api-access-nmmzf") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "kube-api-access-nmmzf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.546121 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.546173 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.546563 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.546709 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.546852 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.546934 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" (OuterVolumeSpecName: "config") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.546957 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.547022 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.547112 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.547132 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.547602 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwlcw\" (UniqueName: \"kubernetes.io/projected/5b188180-f777-4a12-845b-d19fd5853d85-kube-api-access-xwlcw\") pod \"multus-additional-cni-plugins-gbjvz\" (UID: \"5b188180-f777-4a12-845b-d19fd5853d85\") " pod="openshift-multus/multus-additional-cni-plugins-gbjvz" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.548552 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" (OuterVolumeSpecName: "utilities") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.549230 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.551917 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g97xn\" (UniqueName: \"kubernetes.io/projected/3942e752-44ba-4678-8723-6cd778e60d73-kube-api-access-g97xn\") pod \"machine-config-daemon-dvdrn\" (UID: \"3942e752-44ba-4678-8723-6cd778e60d73\") " pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.553035 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" (OuterVolumeSpecName: 
"catalog-content") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.558772 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.562231 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.570351 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.570480 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.571083 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.573540 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbjvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b188180-f777-4a12-845b-d19fd5853d85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbjvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.582591 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.583686 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:50 crc kubenswrapper[5131]: E0107 09:50:50.592693 5131 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 07 09:50:50 crc kubenswrapper[5131]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Jan 07 09:50:50 crc kubenswrapper[5131]: set -o allexport Jan 07 09:50:50 crc kubenswrapper[5131]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Jan 07 09:50:50 crc kubenswrapper[5131]: source /etc/kubernetes/apiserver-url.env Jan 07 09:50:50 crc kubenswrapper[5131]: else Jan 07 09:50:50 crc kubenswrapper[5131]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Jan 07 09:50:50 crc kubenswrapper[5131]: exit 1 Jan 07 09:50:50 crc kubenswrapper[5131]: fi Jan 07 09:50:50 crc kubenswrapper[5131]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Jan 07 09:50:50 crc 
kubenswrapper[5131]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Valu
e:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metad
ata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 07 09:50:50 crc kubenswrapper[5131]: > logger="UnhandledError" Jan 07 09:50:50 crc kubenswrapper[5131]: W0107 09:50:50.593379 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod428b39f5_eb1c_4f65_b7a4_eeb6e84860cc.slice/crio-c200305d2f766549ff3a535abf87ee1de170c4695210076a5039ad2bfee6d400 WatchSource:0}: Error finding container c200305d2f766549ff3a535abf87ee1de170c4695210076a5039ad2bfee6d400: Status 404 returned error can't find the container with id c200305d2f766549ff3a535abf87ee1de170c4695210076a5039ad2bfee6d400 Jan 07 09:50:50 crc kubenswrapper[5131]: E0107 09:50:50.594167 
5131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.595293 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:50 crc kubenswrapper[5131]: E0107 09:50:50.597124 5131 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.598464 5131 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 09:50:50 crc kubenswrapper[5131]: E0107 09:50:50.598929 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.599174 
5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.602376 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 07 09:50:50 crc kubenswrapper[5131]: E0107 09:50:50.604537 5131 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.605529 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.612007 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mrsjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b094e1e2-9ae5-4cf3-9cef-71c25224af2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgbqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mrsjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:50 crc kubenswrapper[5131]: W0107 09:50:50.614158 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc4541ce_7789_4670_bc75_5c2868e52ce0.slice/crio-bda7cbd117853b64f0a6a0358c9704f2d2e8bdd10db11d776f2c04ce8caf4936 WatchSource:0}: Error finding container bda7cbd117853b64f0a6a0358c9704f2d2e8bdd10db11d776f2c04ce8caf4936: Status 404 returned error can't find the container with id bda7cbd117853b64f0a6a0358c9704f2d2e8bdd10db11d776f2c04ce8caf4936 Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.616723 5131 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-mrsjt" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.616937 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-var-lib-openvswitch\") pod \"ovnkube-node-kpj7m\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.616959 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-78wtj\" (UniqueName: \"kubernetes.io/projected/592342ad-cf5f-4290-aa15-e99a6454cbf5-kube-api-access-78wtj\") pod \"ovnkube-node-kpj7m\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.616981 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-run-openvswitch\") pod \"ovnkube-node-kpj7m\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.617000 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-systemd-units\") pod \"ovnkube-node-kpj7m\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.617013 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-host-cni-bin\") pod \"ovnkube-node-kpj7m\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.617028 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-log-socket\") pod \"ovnkube-node-kpj7m\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.617050 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-run-ovn\") pod \"ovnkube-node-kpj7m\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.617051 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-run-openvswitch\") pod \"ovnkube-node-kpj7m\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.617065 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ad935b69-bef7-46a2-a03a-367404c13329-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-n4kr4\" (UID: \"ad935b69-bef7-46a2-a03a-367404c13329\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-n4kr4" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.617103 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-host-kubelet\") pod \"ovnkube-node-kpj7m\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:50:50 crc 
kubenswrapper[5131]: I0107 09:50:50.617121 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ad935b69-bef7-46a2-a03a-367404c13329-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-n4kr4\" (UID: \"ad935b69-bef7-46a2-a03a-367404c13329\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-n4kr4" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.617144 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-host-run-ovn-kubernetes\") pod \"ovnkube-node-kpj7m\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.617161 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-run-systemd\") pod \"ovnkube-node-kpj7m\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.617175 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-etc-openvswitch\") pod \"ovnkube-node-kpj7m\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.617188 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-node-log\") pod \"ovnkube-node-kpj7m\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:50:50 crc 
kubenswrapper[5131]: I0107 09:50:50.617203 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/592342ad-cf5f-4290-aa15-e99a6454cbf5-env-overrides\") pod \"ovnkube-node-kpj7m\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.617225 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1e402924-308a-4d47-8bf8-24a147d5f8bf-hosts-file\") pod \"node-resolver-mb6rx\" (UID: \"1e402924-308a-4d47-8bf8-24a147d5f8bf\") " pod="openshift-dns/node-resolver-mb6rx" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.617248 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/592342ad-cf5f-4290-aa15-e99a6454cbf5-ovn-node-metrics-cert\") pod \"ovnkube-node-kpj7m\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.617269 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-host-run-netns\") pod \"ovnkube-node-kpj7m\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.617284 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/592342ad-cf5f-4290-aa15-e99a6454cbf5-ovnkube-config\") pod \"ovnkube-node-kpj7m\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.617303 5131 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ad935b69-bef7-46a2-a03a-367404c13329-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-n4kr4\" (UID: \"ad935b69-bef7-46a2-a03a-367404c13329\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-n4kr4" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.617318 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r9czf\" (UniqueName: \"kubernetes.io/projected/ad935b69-bef7-46a2-a03a-367404c13329-kube-api-access-r9czf\") pod \"ovnkube-control-plane-57b78d8988-n4kr4\" (UID: \"ad935b69-bef7-46a2-a03a-367404c13329\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-n4kr4" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.617336 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zr9m5\" (UniqueName: \"kubernetes.io/projected/1e402924-308a-4d47-8bf8-24a147d5f8bf-kube-api-access-zr9m5\") pod \"node-resolver-mb6rx\" (UID: \"1e402924-308a-4d47-8bf8-24a147d5f8bf\") " pod="openshift-dns/node-resolver-mb6rx" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.617352 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-host-slash\") pod \"ovnkube-node-kpj7m\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.617367 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-host-cni-netd\") pod \"ovnkube-node-kpj7m\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.617382 5131 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-kpj7m\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.617396 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/592342ad-cf5f-4290-aa15-e99a6454cbf5-ovnkube-script-lib\") pod \"ovnkube-node-kpj7m\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.617413 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/1e402924-308a-4d47-8bf8-24a147d5f8bf-tmp-dir\") pod \"node-resolver-mb6rx\" (UID: \"1e402924-308a-4d47-8bf8-24a147d5f8bf\") " pod="openshift-dns/node-resolver-mb6rx" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.617005 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-var-lib-openvswitch\") pod \"ovnkube-node-kpj7m\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:50:50 crc kubenswrapper[5131]: E0107 09:50:50.617663 5131 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 07 09:50:50 crc kubenswrapper[5131]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 07 09:50:50 crc kubenswrapper[5131]: if [[ -f "/env/_master" ]]; then Jan 07 09:50:50 crc kubenswrapper[5131]: set -o allexport Jan 07 
09:50:50 crc kubenswrapper[5131]: source "/env/_master" Jan 07 09:50:50 crc kubenswrapper[5131]: set +o allexport Jan 07 09:50:50 crc kubenswrapper[5131]: fi Jan 07 09:50:50 crc kubenswrapper[5131]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Jan 07 09:50:50 crc kubenswrapper[5131]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Jan 07 09:50:50 crc kubenswrapper[5131]: ho_enable="--enable-hybrid-overlay" Jan 07 09:50:50 crc kubenswrapper[5131]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Jan 07 09:50:50 crc kubenswrapper[5131]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Jan 07 09:50:50 crc kubenswrapper[5131]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Jan 07 09:50:50 crc kubenswrapper[5131]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 07 09:50:50 crc kubenswrapper[5131]: --webhook-cert-dir="/etc/webhook-cert" \ Jan 07 09:50:50 crc kubenswrapper[5131]: --webhook-host=127.0.0.1 \ Jan 07 09:50:50 crc kubenswrapper[5131]: --webhook-port=9743 \ Jan 07 09:50:50 crc kubenswrapper[5131]: ${ho_enable} \ Jan 07 09:50:50 crc kubenswrapper[5131]: --enable-interconnect \ Jan 07 09:50:50 crc kubenswrapper[5131]: --disable-approver \ Jan 07 09:50:50 crc kubenswrapper[5131]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Jan 07 09:50:50 crc kubenswrapper[5131]: --wait-for-kubernetes-api=200s \ Jan 07 09:50:50 crc kubenswrapper[5131]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Jan 07 09:50:50 crc kubenswrapper[5131]: --loglevel="${LOGLEVEL}" Jan 07 09:50:50 crc kubenswrapper[5131]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 07 09:50:50 crc 
kubenswrapper[5131]: > logger="UnhandledError" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.617721 5131 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.617774 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-node-log\") pod \"ovnkube-node-kpj7m\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.618086 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/1e402924-308a-4d47-8bf8-24a147d5f8bf-tmp-dir\") pod \"node-resolver-mb6rx\" (UID: \"1e402924-308a-4d47-8bf8-24a147d5f8bf\") " pod="openshift-dns/node-resolver-mb6rx" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.618123 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-systemd-units\") pod \"ovnkube-node-kpj7m\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.618148 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-host-cni-bin\") pod \"ovnkube-node-kpj7m\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.618167 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-log-socket\") pod \"ovnkube-node-kpj7m\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.618190 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-run-ovn\") pod \"ovnkube-node-kpj7m\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.618362 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-host-slash\") pod \"ovnkube-node-kpj7m\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.618391 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-host-cni-netd\") pod \"ovnkube-node-kpj7m\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.618415 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-kpj7m\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.618686 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-host-run-ovn-kubernetes\") pod \"ovnkube-node-kpj7m\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.618712 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-host-kubelet\") pod \"ovnkube-node-kpj7m\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.618785 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1e402924-308a-4d47-8bf8-24a147d5f8bf-hosts-file\") pod \"node-resolver-mb6rx\" (UID: \"1e402924-308a-4d47-8bf8-24a147d5f8bf\") " pod="openshift-dns/node-resolver-mb6rx" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.618825 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-etc-openvswitch\") pod \"ovnkube-node-kpj7m\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.618979 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-host-run-netns\") pod \"ovnkube-node-kpj7m\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.619264 5131 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc 
kubenswrapper[5131]: I0107 09:50:50.619283 5131 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.619292 5131 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.619305 5131 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.619313 5131 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.619324 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.619334 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.619342 5131 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.619351 5131 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.619359 5131 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.619368 5131 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.619376 5131 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.619385 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.619408 5131 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.619417 5131 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.619426 5131 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: 
I0107 09:50:50.619434 5131 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.619442 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.619451 5131 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.619460 5131 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.619469 5131 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.619478 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.619472 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/592342ad-cf5f-4290-aa15-e99a6454cbf5-ovnkube-config\") pod \"ovnkube-node-kpj7m\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 
09:50:50.619554 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.619571 5131 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.619582 5131 reconciler_common.go:299] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.619595 5131 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.619608 5131 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.619620 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.619630 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.619641 5131 reconciler_common.go:299] "Volume 
detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.619651 5131 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.619662 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.619673 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.619684 5131 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.619695 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.619706 5131 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.619717 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgrkj\" (UniqueName: 
\"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.619728 5131 reconciler_common.go:299] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.619739 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.619750 5131 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.619764 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.619775 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.619786 5131 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.619796 5131 reconciler_common.go:299] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") on node 
\"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.619807 5131 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.619819 5131 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.619849 5131 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.619861 5131 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.619871 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.619884 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.619897 5131 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.619911 
5131 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.619922 5131 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.619935 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.619947 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.619958 5131 reconciler_common.go:299] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.619969 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.619980 5131 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.619991 5131 reconciler_common.go:299] "Volume detached for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.620005 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.620017 5131 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.620028 5131 reconciler_common.go:299] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.620033 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ad935b69-bef7-46a2-a03a-367404c13329-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-n4kr4\" (UID: \"ad935b69-bef7-46a2-a03a-367404c13329\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-n4kr4" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.620040 5131 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.620052 5131 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc 
kubenswrapper[5131]: I0107 09:50:50.620063 5131 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.620074 5131 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.620084 5131 reconciler_common.go:299] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.620094 5131 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.620104 5131 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.620114 5131 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.620126 5131 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.620139 5131 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.620147 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-run-systemd\") pod \"ovnkube-node-kpj7m\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.620151 5131 reconciler_common.go:299] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.620167 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.620179 5131 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.620189 5131 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.620201 5131 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.620214 5131 reconciler_common.go:299] "Volume detached for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.620227 5131 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.620238 5131 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.620250 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.620261 5131 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.620272 5131 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.620281 5131 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.620291 5131 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.620304 5131 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.620315 5131 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.620326 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.620336 5131 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.620345 5131 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.620354 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.620364 5131 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") 
on node \"crc\" DevicePath \"\"" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.621105 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ad935b69-bef7-46a2-a03a-367404c13329-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-n4kr4\" (UID: \"ad935b69-bef7-46a2-a03a-367404c13329\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-n4kr4" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.621638 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/592342ad-cf5f-4290-aa15-e99a6454cbf5-ovnkube-script-lib\") pod \"ovnkube-node-kpj7m\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.622338 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/592342ad-cf5f-4290-aa15-e99a6454cbf5-env-overrides\") pod \"ovnkube-node-kpj7m\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.623082 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-wcqw9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf4gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wcqw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.623159 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ad935b69-bef7-46a2-a03a-367404c13329-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-n4kr4\" (UID: \"ad935b69-bef7-46a2-a03a-367404c13329\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-n4kr4" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.623302 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/592342ad-cf5f-4290-aa15-e99a6454cbf5-ovn-node-metrics-cert\") pod \"ovnkube-node-kpj7m\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:50:50 crc kubenswrapper[5131]: E0107 09:50:50.623888 5131 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 07 09:50:50 crc kubenswrapper[5131]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 07 09:50:50 crc kubenswrapper[5131]: if [[ -f "/env/_master" ]]; then Jan 07 09:50:50 crc kubenswrapper[5131]: set -o allexport Jan 07 09:50:50 crc 
kubenswrapper[5131]: source "/env/_master" Jan 07 09:50:50 crc kubenswrapper[5131]: set +o allexport Jan 07 09:50:50 crc kubenswrapper[5131]: fi Jan 07 09:50:50 crc kubenswrapper[5131]: Jan 07 09:50:50 crc kubenswrapper[5131]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Jan 07 09:50:50 crc kubenswrapper[5131]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 07 09:50:50 crc kubenswrapper[5131]: --disable-webhook \ Jan 07 09:50:50 crc kubenswrapper[5131]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Jan 07 09:50:50 crc kubenswrapper[5131]: --loglevel="${LOGLEVEL}" Jan 07 09:50:50 crc kubenswrapper[5131]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSou
rce{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 07 09:50:50 crc kubenswrapper[5131]: > logger="UnhandledError" Jan 07 09:50:50 crc kubenswrapper[5131]: E0107 09:50:50.626203 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.632741 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9czf\" (UniqueName: \"kubernetes.io/projected/ad935b69-bef7-46a2-a03a-367404c13329-kube-api-access-r9czf\") pod \"ovnkube-control-plane-57b78d8988-n4kr4\" (UID: \"ad935b69-bef7-46a2-a03a-367404c13329\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-n4kr4" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.634459 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-78wtj\" (UniqueName: \"kubernetes.io/projected/592342ad-cf5f-4290-aa15-e99a6454cbf5-kube-api-access-78wtj\") pod \"ovnkube-node-kpj7m\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.636757 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-wcqw9" Jan 07 09:50:50 crc kubenswrapper[5131]: E0107 09:50:50.636963 5131 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 07 09:50:50 crc kubenswrapper[5131]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Jan 07 09:50:50 crc kubenswrapper[5131]: while [ true ]; Jan 07 09:50:50 crc kubenswrapper[5131]: do Jan 07 09:50:50 crc kubenswrapper[5131]: for f in $(ls /tmp/serviceca); do Jan 07 09:50:50 crc kubenswrapper[5131]: echo $f Jan 07 09:50:50 crc kubenswrapper[5131]: ca_file_path="/tmp/serviceca/${f}" Jan 07 09:50:50 crc kubenswrapper[5131]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Jan 07 09:50:50 crc kubenswrapper[5131]: reg_dir_path="/etc/docker/certs.d/${f}" Jan 07 09:50:50 crc kubenswrapper[5131]: if [ -e "${reg_dir_path}" ]; then Jan 07 09:50:50 crc kubenswrapper[5131]: cp -u $ca_file_path $reg_dir_path/ca.crt Jan 07 09:50:50 crc kubenswrapper[5131]: else Jan 07 09:50:50 crc kubenswrapper[5131]: mkdir $reg_dir_path Jan 07 09:50:50 crc kubenswrapper[5131]: cp $ca_file_path $reg_dir_path/ca.crt Jan 07 09:50:50 crc kubenswrapper[5131]: fi Jan 07 09:50:50 crc kubenswrapper[5131]: done Jan 07 09:50:50 crc kubenswrapper[5131]: for d in $(ls /etc/docker/certs.d); do Jan 07 09:50:50 crc kubenswrapper[5131]: echo $d Jan 07 09:50:50 crc kubenswrapper[5131]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Jan 07 09:50:50 crc kubenswrapper[5131]: reg_conf_path="/tmp/serviceca/${dp}" Jan 07 09:50:50 crc kubenswrapper[5131]: if [ ! 
-e "${reg_conf_path}" ]; then Jan 07 09:50:50 crc kubenswrapper[5131]: rm -rf /etc/docker/certs.d/$d Jan 07 09:50:50 crc kubenswrapper[5131]: fi Jan 07 09:50:50 crc kubenswrapper[5131]: done Jan 07 09:50:50 crc kubenswrapper[5131]: sleep 60 & wait ${!} Jan 07 09:50:50 crc kubenswrapper[5131]: done Jan 07 09:50:50 crc kubenswrapper[5131]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qgbqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-mrsjt_openshift-image-registry(b094e1e2-9ae5-4cf3-9cef-71c25224af2a): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 07 09:50:50 crc kubenswrapper[5131]: > logger="UnhandledError" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.637491 5131 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.637541 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.637559 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.637590 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.637607 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:50Z","lastTransitionTime":"2026-01-07T09:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 09:50:50 crc kubenswrapper[5131]: E0107 09:50:50.638153 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-mrsjt" podUID="b094e1e2-9ae5-4cf3-9cef-71c25224af2a" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.645901 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zr9m5\" (UniqueName: \"kubernetes.io/projected/1e402924-308a-4d47-8bf8-24a147d5f8bf-kube-api-access-zr9m5\") pod \"node-resolver-mb6rx\" (UID: \"1e402924-308a-4d47-8bf8-24a147d5f8bf\") " pod="openshift-dns/node-resolver-mb6rx" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.646051 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-gbjvz" Jan 07 09:50:50 crc kubenswrapper[5131]: E0107 09:50:50.655052 5131 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 07 09:50:50 crc kubenswrapper[5131]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Jan 07 09:50:50 crc kubenswrapper[5131]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Jan 07 09:50:50 crc kubenswrapper[5131]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{
Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pf4gw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-wcqw9_openshift-multus(a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 07 09:50:50 crc kubenswrapper[5131]: > logger="UnhandledError" Jan 07 09:50:50 crc kubenswrapper[5131]: E0107 09:50:50.656213 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-wcqw9" podUID="a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1" Jan 07 09:50:50 crc kubenswrapper[5131]: W0107 09:50:50.659021 5131 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b188180_f777_4a12_845b_d19fd5853d85.slice/crio-8fdc47f1d6edab3807e93dc6f212f39604e5b9c0d42bbcbbd93d37137d73ea4d WatchSource:0}: Error finding container 8fdc47f1d6edab3807e93dc6f212f39604e5b9c0d42bbcbbd93d37137d73ea4d: Status 404 returned error can't find the container with id 8fdc47f1d6edab3807e93dc6f212f39604e5b9c0d42bbcbbd93d37137d73ea4d Jan 07 09:50:50 crc kubenswrapper[5131]: E0107 09:50:50.660426 5131 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xwlcw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[
]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-gbjvz_openshift-multus(5b188180-f777-4a12-845b-d19fd5853d85): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 07 09:50:50 crc kubenswrapper[5131]: E0107 09:50:50.662179 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-gbjvz" podUID="5b188180-f777-4a12-845b-d19fd5853d85" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.662255 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.669824 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-n4kr4" Jan 07 09:50:50 crc kubenswrapper[5131]: W0107 09:50:50.673633 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3942e752_44ba_4678_8723_6cd778e60d73.slice/crio-c2334609e986d44db8273ad63e522ecc3298fe873978c100d13966a767262ad0 WatchSource:0}: Error finding container c2334609e986d44db8273ad63e522ecc3298fe873978c100d13966a767262ad0: Status 404 returned error can't find the container with id c2334609e986d44db8273ad63e522ecc3298fe873978c100d13966a767262ad0 Jan 07 09:50:50 crc kubenswrapper[5131]: E0107 09:50:50.677044 5131 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g97xn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 
},Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-dvdrn_openshift-machine-config-operator(3942e752-44ba-4678-8723-6cd778e60d73): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 07 09:50:50 crc kubenswrapper[5131]: E0107 09:50:50.679327 5131 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt 
--tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g97xn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-dvdrn_openshift-machine-config-operator(3942e752-44ba-4678-8723-6cd778e60d73): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 07 09:50:50 crc kubenswrapper[5131]: E0107 09:50:50.680514 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" 
podUID="3942e752-44ba-4678-8723-6cd778e60d73" Jan 07 09:50:50 crc kubenswrapper[5131]: W0107 09:50:50.682034 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad935b69_bef7_46a2_a03a_367404c13329.slice/crio-cf0149ee7495c3cc741d9ef73df2e4298e45d78b190a036516c810fbb965a563 WatchSource:0}: Error finding container cf0149ee7495c3cc741d9ef73df2e4298e45d78b190a036516c810fbb965a563: Status 404 returned error can't find the container with id cf0149ee7495c3cc741d9ef73df2e4298e45d78b190a036516c810fbb965a563 Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.684050 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"592342ad-cf5f-4290-aa15-e99a6454cbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kpj7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:50 crc kubenswrapper[5131]: E0107 09:50:50.684584 5131 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 07 09:50:50 crc kubenswrapper[5131]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Jan 07 09:50:50 crc kubenswrapper[5131]: set -euo pipefail Jan 07 09:50:50 crc kubenswrapper[5131]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Jan 07 09:50:50 crc kubenswrapper[5131]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Jan 07 09:50:50 crc kubenswrapper[5131]: # As the secret mount is optional we must wait for the files to be present. Jan 07 09:50:50 crc kubenswrapper[5131]: # The service is created in monitor.yaml and this is created in sdn.yaml. Jan 07 09:50:50 crc kubenswrapper[5131]: TS=$(date +%s) Jan 07 09:50:50 crc kubenswrapper[5131]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Jan 07 09:50:50 crc kubenswrapper[5131]: HAS_LOGGED_INFO=0 Jan 07 09:50:50 crc kubenswrapper[5131]: Jan 07 09:50:50 crc kubenswrapper[5131]: log_missing_certs(){ Jan 07 09:50:50 crc kubenswrapper[5131]: CUR_TS=$(date +%s) Jan 07 09:50:50 crc kubenswrapper[5131]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Jan 07 09:50:50 crc kubenswrapper[5131]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. 
Jan 07 09:50:50 crc kubenswrapper[5131]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Jan 07 09:50:50 crc kubenswrapper[5131]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Jan 07 09:50:50 crc kubenswrapper[5131]: HAS_LOGGED_INFO=1 Jan 07 09:50:50 crc kubenswrapper[5131]: fi Jan 07 09:50:50 crc kubenswrapper[5131]: } Jan 07 09:50:50 crc kubenswrapper[5131]: while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]] ; do Jan 07 09:50:50 crc kubenswrapper[5131]: log_missing_certs Jan 07 09:50:50 crc kubenswrapper[5131]: sleep 5 Jan 07 09:50:50 crc kubenswrapper[5131]: done Jan 07 09:50:50 crc kubenswrapper[5131]: Jan 07 09:50:50 crc kubenswrapper[5131]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Jan 07 09:50:50 crc kubenswrapper[5131]: exec /usr/bin/kube-rbac-proxy \ Jan 07 09:50:50 crc kubenswrapper[5131]: --logtostderr \ Jan 07 09:50:50 crc kubenswrapper[5131]: --secure-listen-address=:9108 \ Jan 07 09:50:50 crc kubenswrapper[5131]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Jan 07 09:50:50 crc kubenswrapper[5131]: --upstream=http://127.0.0.1:29108/ \ Jan 07 09:50:50 crc kubenswrapper[5131]: --tls-private-key-file=${TLS_PK} \ Jan 07 09:50:50 crc kubenswrapper[5131]: --tls-cert-file=${TLS_CERT} Jan 07 09:50:50 crc kubenswrapper[5131]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r9czf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-n4kr4_openshift-ovn-kubernetes(ad935b69-bef7-46a2-a03a-367404c13329): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Jan 07 09:50:50 crc kubenswrapper[5131]: > logger="UnhandledError"
Jan 07 09:50:50 crc kubenswrapper[5131]: E0107 09:50:50.686669 5131 kuberuntime_manager.go:1358] "Unhandled Error" err=<
Jan 07 09:50:50 crc kubenswrapper[5131]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe
Jan 07 09:50:50 crc kubenswrapper[5131]: if [[ -f "/env/_master" ]]; then
Jan 07 09:50:50 crc kubenswrapper[5131]: set -o allexport
Jan 07 09:50:50 crc kubenswrapper[5131]: source "/env/_master"
Jan 07 09:50:50 crc kubenswrapper[5131]: set +o allexport
Jan 07 09:50:50 crc kubenswrapper[5131]: fi
Jan 07 09:50:50 crc kubenswrapper[5131]: 
Jan 07 09:50:50 crc kubenswrapper[5131]: ovn_v4_join_subnet_opt=
Jan 07 09:50:50 crc kubenswrapper[5131]: if [[ "" != "" ]]; then
Jan 07 09:50:50 crc kubenswrapper[5131]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet "
Jan 07 09:50:50 crc kubenswrapper[5131]: fi
Jan 07 09:50:50 crc kubenswrapper[5131]: ovn_v6_join_subnet_opt=
Jan 07 09:50:50 crc kubenswrapper[5131]: if [[ "" != "" ]]; then
Jan 07 09:50:50 crc kubenswrapper[5131]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet "
Jan 07 09:50:50 crc kubenswrapper[5131]: fi
Jan 07 09:50:50 crc kubenswrapper[5131]: 
Jan 07 09:50:50 crc kubenswrapper[5131]: ovn_v4_transit_switch_subnet_opt=
Jan 07 09:50:50 crc kubenswrapper[5131]: if [[ "" != "" ]]; then
Jan 07 09:50:50 crc kubenswrapper[5131]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet "
Jan 07 09:50:50 crc kubenswrapper[5131]: fi
Jan 07 09:50:50 crc kubenswrapper[5131]: ovn_v6_transit_switch_subnet_opt=
Jan 07 09:50:50 crc kubenswrapper[5131]: if [[ "" != "" ]]; then
Jan 07 09:50:50 crc kubenswrapper[5131]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet "
Jan 07 09:50:50 crc kubenswrapper[5131]: fi
Jan 07 09:50:50 crc kubenswrapper[5131]: 
Jan 07 09:50:50 crc kubenswrapper[5131]: dns_name_resolver_enabled_flag=
Jan 07 09:50:50 crc kubenswrapper[5131]: if [[ "false" == "true" ]]; then
Jan 07 09:50:50 crc kubenswrapper[5131]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver"
Jan 07 09:50:50 crc kubenswrapper[5131]: fi
Jan 07 09:50:50 crc kubenswrapper[5131]: 
Jan 07 09:50:50 crc kubenswrapper[5131]: persistent_ips_enabled_flag="--enable-persistent-ips"
Jan 07 09:50:50 crc kubenswrapper[5131]: 
Jan 07 09:50:50 crc kubenswrapper[5131]: # This is needed so that converting clusters from GA to TP
Jan 07 09:50:50 crc kubenswrapper[5131]: # will rollout control plane pods as well
Jan 07 09:50:50 crc kubenswrapper[5131]: network_segmentation_enabled_flag=
Jan 07 09:50:50 crc kubenswrapper[5131]: multi_network_enabled_flag=
Jan 07 09:50:50 crc kubenswrapper[5131]: if [[ "true" == "true" ]]; then
Jan 07 09:50:50 crc kubenswrapper[5131]: multi_network_enabled_flag="--enable-multi-network"
Jan 07 09:50:50 crc kubenswrapper[5131]: fi
Jan 07 09:50:50 crc kubenswrapper[5131]: if [[ "true" == "true" ]]; then
Jan 07 09:50:50 crc kubenswrapper[5131]: if [[ "true" != "true" ]]; then
Jan 07 09:50:50 crc kubenswrapper[5131]: multi_network_enabled_flag="--enable-multi-network"
Jan 07 09:50:50 crc kubenswrapper[5131]: fi
Jan 07 09:50:50 crc kubenswrapper[5131]: network_segmentation_enabled_flag="--enable-network-segmentation"
Jan 07 09:50:50 crc kubenswrapper[5131]: fi
Jan 07 09:50:50 crc kubenswrapper[5131]: 
Jan 07 09:50:50 crc kubenswrapper[5131]: route_advertisements_enable_flag=
Jan 07 09:50:50 crc kubenswrapper[5131]: if [[ "false" == "true" ]]; then
Jan 07 09:50:50 crc kubenswrapper[5131]: route_advertisements_enable_flag="--enable-route-advertisements"
Jan 07 09:50:50 crc kubenswrapper[5131]: fi
Jan 07 09:50:50 crc kubenswrapper[5131]: 
Jan 07 09:50:50 crc kubenswrapper[5131]: preconfigured_udn_addresses_enable_flag=
Jan 07 09:50:50 crc kubenswrapper[5131]: if [[ "false" == "true" ]]; then
Jan 07 09:50:50 crc kubenswrapper[5131]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses"
Jan 07 09:50:50 crc kubenswrapper[5131]: fi
Jan 07 09:50:50 crc kubenswrapper[5131]: 
Jan 07 09:50:50 crc kubenswrapper[5131]: # Enable multi-network policy if configured (control-plane always full mode)
Jan 07 09:50:50 crc kubenswrapper[5131]: multi_network_policy_enabled_flag=
Jan 07 09:50:50 crc kubenswrapper[5131]: if [[ "false" == "true" ]]; then
Jan 07 09:50:50 crc kubenswrapper[5131]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy"
Jan 07 09:50:50 crc kubenswrapper[5131]: fi
Jan 07 09:50:50 crc kubenswrapper[5131]: 
Jan 07 09:50:50 crc kubenswrapper[5131]: # Enable admin network policy if configured (control-plane always full mode)
Jan 07 09:50:50 crc kubenswrapper[5131]: admin_network_policy_enabled_flag=
Jan 07 09:50:50 crc kubenswrapper[5131]: if [[ "true" == "true" ]]; then
Jan 07 09:50:50 crc kubenswrapper[5131]: admin_network_policy_enabled_flag="--enable-admin-network-policy"
Jan 07 09:50:50 crc kubenswrapper[5131]: fi
Jan 07 09:50:50 crc kubenswrapper[5131]: 
Jan 07 09:50:50 crc kubenswrapper[5131]: if [ "shared" == "shared" ]; then
Jan 07 09:50:50 crc kubenswrapper[5131]: gateway_mode_flags="--gateway-mode shared"
Jan 07 09:50:50 crc kubenswrapper[5131]: elif [ "shared" == "local" ]; then
Jan 07 09:50:50 crc kubenswrapper[5131]: gateway_mode_flags="--gateway-mode local"
Jan 07 09:50:50 crc kubenswrapper[5131]: else
Jan 07 09:50:50 crc kubenswrapper[5131]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"."
Jan 07 09:50:50 crc kubenswrapper[5131]: exit 1
Jan 07 09:50:50 crc kubenswrapper[5131]: fi
Jan 07 09:50:50 crc kubenswrapper[5131]: 
Jan 07 09:50:50 crc kubenswrapper[5131]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}"
Jan 07 09:50:50 crc kubenswrapper[5131]: exec /usr/bin/ovnkube \
Jan 07 09:50:50 crc kubenswrapper[5131]: --enable-interconnect \
Jan 07 09:50:50 crc kubenswrapper[5131]: --init-cluster-manager "${K8S_NODE}" \
Jan 07 09:50:50 crc kubenswrapper[5131]: --config-file=/run/ovnkube-config/ovnkube.conf \
Jan 07 09:50:50 crc kubenswrapper[5131]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \
Jan 07 09:50:50 crc kubenswrapper[5131]: --metrics-bind-address "127.0.0.1:29108" \
Jan 07 09:50:50 crc kubenswrapper[5131]: --metrics-enable-pprof \
Jan 07 09:50:50 crc kubenswrapper[5131]: --metrics-enable-config-duration \
Jan 07 09:50:50 crc kubenswrapper[5131]: ${ovn_v4_join_subnet_opt} \
Jan 07 09:50:50 crc kubenswrapper[5131]: ${ovn_v6_join_subnet_opt} \
Jan 07 09:50:50 crc kubenswrapper[5131]: ${ovn_v4_transit_switch_subnet_opt} \
Jan 07 09:50:50 crc kubenswrapper[5131]: ${ovn_v6_transit_switch_subnet_opt} \
Jan 07 09:50:50 crc kubenswrapper[5131]: ${dns_name_resolver_enabled_flag} \
Jan 07 09:50:50 crc kubenswrapper[5131]: ${persistent_ips_enabled_flag} \
Jan 07 09:50:50 crc kubenswrapper[5131]: ${multi_network_enabled_flag} \
Jan 07 09:50:50 crc kubenswrapper[5131]: ${network_segmentation_enabled_flag} \
Jan 07 09:50:50 crc kubenswrapper[5131]: ${gateway_mode_flags} \
Jan 07 09:50:50 crc kubenswrapper[5131]: ${route_advertisements_enable_flag} \
Jan 07 09:50:50 crc kubenswrapper[5131]: ${preconfigured_udn_addresses_enable_flag} \
Jan 07 09:50:50 crc kubenswrapper[5131]: --enable-egress-ip=true \
Jan 07 09:50:50 crc kubenswrapper[5131]: --enable-egress-firewall=true \
Jan 07 09:50:50 crc kubenswrapper[5131]: --enable-egress-qos=true \
Jan 07 09:50:50 crc kubenswrapper[5131]: --enable-egress-service=true \
Jan 07 09:50:50 crc kubenswrapper[5131]: --enable-multicast \
Jan 07 09:50:50 crc kubenswrapper[5131]: --enable-multi-external-gateway=true \
Jan 07 09:50:50 crc kubenswrapper[5131]: ${multi_network_policy_enabled_flag} \
Jan 07 09:50:50 crc kubenswrapper[5131]: ${admin_network_policy_enabled_flag}
Jan 07 09:50:50 crc kubenswrapper[5131]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r9czf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-n4kr4_openshift-ovn-kubernetes(ad935b69-bef7-46a2-a03a-367404c13329): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 07 09:50:50 crc kubenswrapper[5131]: > logger="UnhandledError" Jan 07 09:50:50 crc kubenswrapper[5131]: E0107 09:50:50.687910 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-n4kr4" podUID="ad935b69-bef7-46a2-a03a-367404c13329" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.699281 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-mb6rx"
Jan 07 09:50:50 crc kubenswrapper[5131]: E0107 09:50:50.714636 5131 kuberuntime_manager.go:1358] "Unhandled Error" err=<
Jan 07 09:50:50 crc kubenswrapper[5131]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash
Jan 07 09:50:50 crc kubenswrapper[5131]: set -uo pipefail
Jan 07 09:50:50 crc kubenswrapper[5131]: 
Jan 07 09:50:50 crc kubenswrapper[5131]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM
Jan 07 09:50:50 crc kubenswrapper[5131]: 
Jan 07 09:50:50 crc kubenswrapper[5131]: OPENSHIFT_MARKER="openshift-generated-node-resolver"
Jan 07 09:50:50 crc kubenswrapper[5131]: HOSTS_FILE="/etc/hosts"
Jan 07 09:50:50 crc kubenswrapper[5131]: TEMP_FILE="/tmp/hosts.tmp"
Jan 07 09:50:50 crc kubenswrapper[5131]: 
Jan 07 09:50:50 crc kubenswrapper[5131]: IFS=', ' read -r -a services <<< "${SERVICES}"
Jan 07 09:50:50 crc kubenswrapper[5131]: 
Jan 07 09:50:50 crc kubenswrapper[5131]: # Make a temporary file with the old hosts file's attributes.
Jan 07 09:50:50 crc kubenswrapper[5131]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then
Jan 07 09:50:50 crc kubenswrapper[5131]: echo "Failed to preserve hosts file. Exiting."
Jan 07 09:50:50 crc kubenswrapper[5131]: exit 1
Jan 07 09:50:50 crc kubenswrapper[5131]: fi
Jan 07 09:50:50 crc kubenswrapper[5131]: 
Jan 07 09:50:50 crc kubenswrapper[5131]: while true; do
Jan 07 09:50:50 crc kubenswrapper[5131]: declare -A svc_ips
Jan 07 09:50:50 crc kubenswrapper[5131]: for svc in "${services[@]}"; do
Jan 07 09:50:50 crc kubenswrapper[5131]: # Fetch service IP from cluster dns if present. We make several tries
Jan 07 09:50:50 crc kubenswrapper[5131]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones
Jan 07 09:50:50 crc kubenswrapper[5131]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not
Jan 07 09:50:50 crc kubenswrapper[5131]: # support UDP loadbalancers and require reaching DNS through TCP.
Jan 07 09:50:50 crc kubenswrapper[5131]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"'
Jan 07 09:50:50 crc kubenswrapper[5131]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"'
Jan 07 09:50:50 crc kubenswrapper[5131]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"'
Jan 07 09:50:50 crc kubenswrapper[5131]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"')
Jan 07 09:50:50 crc kubenswrapper[5131]: for i in ${!cmds[*]}
Jan 07 09:50:50 crc kubenswrapper[5131]: do
Jan 07 09:50:50 crc kubenswrapper[5131]: ips=($(eval "${cmds[i]}"))
Jan 07 09:50:50 crc kubenswrapper[5131]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then
Jan 07 09:50:50 crc kubenswrapper[5131]: svc_ips["${svc}"]="${ips[@]}"
Jan 07 09:50:50 crc kubenswrapper[5131]: break
Jan 07 09:50:50 crc kubenswrapper[5131]: fi
Jan 07 09:50:50 crc kubenswrapper[5131]: done
Jan 07 09:50:50 crc kubenswrapper[5131]: done
Jan 07 09:50:50 crc kubenswrapper[5131]: 
Jan 07 09:50:50 crc kubenswrapper[5131]: # Update /etc/hosts only if we get valid service IPs
Jan 07 09:50:50 crc kubenswrapper[5131]: # We will not update /etc/hosts when there is coredns service outage or api unavailability
Jan 07 09:50:50 crc kubenswrapper[5131]: # Stale entries could exist in /etc/hosts if the service is deleted
Jan 07 09:50:50 crc kubenswrapper[5131]: if [[ -n "${svc_ips[*]-}" ]]; then
Jan 07 09:50:50 crc kubenswrapper[5131]: # Build a new hosts file from /etc/hosts with our custom entries filtered out
Jan 07 09:50:50 crc kubenswrapper[5131]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then
Jan 07 09:50:50 crc kubenswrapper[5131]: # Only continue rebuilding the hosts entries if its original content is preserved
Jan 07 09:50:50 crc kubenswrapper[5131]: sleep 60 & wait
Jan 07 09:50:50 crc kubenswrapper[5131]: continue
Jan 07 09:50:50 crc kubenswrapper[5131]: fi
Jan 07 09:50:50 crc kubenswrapper[5131]: 
Jan 07 09:50:50 crc kubenswrapper[5131]: # Append resolver entries for services
Jan 07 09:50:50 crc kubenswrapper[5131]: rc=0
Jan 07 09:50:50 crc kubenswrapper[5131]: for svc in "${!svc_ips[@]}"; do
Jan 07 09:50:50 crc kubenswrapper[5131]: for ip in ${svc_ips[${svc}]}; do
Jan 07 09:50:50 crc kubenswrapper[5131]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$?
Jan 07 09:50:50 crc kubenswrapper[5131]: done
Jan 07 09:50:50 crc kubenswrapper[5131]: done
Jan 07 09:50:50 crc kubenswrapper[5131]: if [[ $rc -ne 0 ]]; then
Jan 07 09:50:50 crc kubenswrapper[5131]: sleep 60 & wait
Jan 07 09:50:50 crc kubenswrapper[5131]: continue
Jan 07 09:50:50 crc kubenswrapper[5131]: fi
Jan 07 09:50:50 crc kubenswrapper[5131]: 
Jan 07 09:50:50 crc kubenswrapper[5131]: 
Jan 07 09:50:50 crc kubenswrapper[5131]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior
Jan 07 09:50:50 crc kubenswrapper[5131]: # Replace /etc/hosts with our modified version if needed
Jan 07 09:50:50 crc kubenswrapper[5131]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}"
Jan 07 09:50:50 crc kubenswrapper[5131]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn
Jan 07 09:50:50 crc kubenswrapper[5131]: fi
Jan 07 09:50:50 crc kubenswrapper[5131]: sleep 60 & wait
Jan 07 09:50:50 crc kubenswrapper[5131]: unset svc_ips
Jan 07 09:50:50 crc kubenswrapper[5131]: done
Jan 07 09:50:50 crc kubenswrapper[5131]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zr9m5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-mb6rx_openshift-dns(1e402924-308a-4d47-8bf8-24a147d5f8bf): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 07 09:50:50 crc kubenswrapper[5131]: > logger="UnhandledError" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.714933 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf037731-32c8-4638-9ee7-13bdb0c68279\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://45412eb529aa08671402f5e439a2d0258d5e438466b13a1a3a8264e3eb9c8407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b059fcd2d184beda447aba3f6a320cb6d3f0c1bc3061fc47b9020d4c03f4a020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b059fcd2d184beda447aba3f6a320cb6d3f0c1bc3061fc47b9020d4c03f4a020\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:49:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:50 crc kubenswrapper[5131]: E0107 09:50:50.715790 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with 
CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-mb6rx" podUID="1e402924-308a-4d47-8bf8-24a147d5f8bf" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.730372 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.739599 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.739650 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.739664 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.739681 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.739693 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:50Z","lastTransitionTime":"2026-01-07T09:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:50 crc kubenswrapper[5131]: W0107 09:50:50.742967 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod592342ad_cf5f_4290_aa15_e99a6454cbf5.slice/crio-6920e97d4ae3db7ace2a35f2b0285671fe6c1cb143daeda1d12ff8dfe1d750af WatchSource:0}: Error finding container 6920e97d4ae3db7ace2a35f2b0285671fe6c1cb143daeda1d12ff8dfe1d750af: Status 404 returned error can't find the container with id 6920e97d4ae3db7ace2a35f2b0285671fe6c1cb143daeda1d12ff8dfe1d750af Jan 07 09:50:50 crc kubenswrapper[5131]: E0107 09:50:50.744995 5131 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 07 09:50:50 crc kubenswrapper[5131]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Jan 07 09:50:50 crc kubenswrapper[5131]: apiVersion: v1 Jan 07 09:50:50 crc kubenswrapper[5131]: clusters: Jan 07 09:50:50 crc kubenswrapper[5131]: - cluster: Jan 07 09:50:50 crc kubenswrapper[5131]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Jan 07 09:50:50 crc kubenswrapper[5131]: server: https://api-int.crc.testing:6443 Jan 07 09:50:50 crc kubenswrapper[5131]: name: default-cluster Jan 07 09:50:50 crc kubenswrapper[5131]: contexts: Jan 07 09:50:50 crc kubenswrapper[5131]: - context: Jan 07 09:50:50 crc kubenswrapper[5131]: cluster: default-cluster Jan 07 09:50:50 crc kubenswrapper[5131]: namespace: default Jan 07 09:50:50 crc kubenswrapper[5131]: user: default-auth Jan 07 09:50:50 crc kubenswrapper[5131]: name: default-context Jan 07 09:50:50 crc kubenswrapper[5131]: current-context: default-context Jan 07 09:50:50 crc kubenswrapper[5131]: kind: Config Jan 07 09:50:50 crc kubenswrapper[5131]: preferences: {} Jan 07 09:50:50 crc kubenswrapper[5131]: users: Jan 07 09:50:50 crc 
kubenswrapper[5131]: - name: default-auth Jan 07 09:50:50 crc kubenswrapper[5131]: user: Jan 07 09:50:50 crc kubenswrapper[5131]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 07 09:50:50 crc kubenswrapper[5131]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 07 09:50:50 crc kubenswrapper[5131]: EOF Jan 07 09:50:50 crc kubenswrapper[5131]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-78wtj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-kpj7m_openshift-ovn-kubernetes(592342ad-cf5f-4290-aa15-e99a6454cbf5): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 07 09:50:50 crc kubenswrapper[5131]: > logger="UnhandledError" Jan 07 09:50:50 crc kubenswrapper[5131]: E0107 09:50:50.746163 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" podUID="592342ad-cf5f-4290-aa15-e99a6454cbf5" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.759687 5131 
status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a8b62c-1e16-4bf4-8a1a-7e21eea28a36\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://9ab1390be253b0acce2b38b656b6ab5fb3b2b0b0df6b0bf4aa1c9a6706d5b377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://222177f33dbcd646941928b01ab9b05233038233497ca1767fba6f7706b3dc93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f94695038c6c0633c279f363909c7c60ac6e6487469757ddfa9a64766e9ad38a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7
e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://81989c7ac801be354f6f1e78382dbefc67b72ef6a85367ea48e04fc6ff4f128e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce5c5322e4dfa939241d2f3f807c9d150117431e391c5f986a200413b054a33\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-07T09:50:19Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW0107 09:50:18.874623 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0107 09:50:18.874740 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0107 09:50:18.875448 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1194057718/tls.crt::/tmp/serving-cert-1194057718/tls.key\\\\\\\"\\\\nI0107 09:50:19.352672 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0107 09:50:19.355791 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0107 09:50:19.355824 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0107 09:50:19.355916 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0107 09:50:19.355934 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0107 09:50:19.362427 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0107 09:50:19.362473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0107 09:50:19.362471 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0107 09:50:19.362482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 09:50:19.362512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0107 09:50:19.362519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0107 09:50:19.362527 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0107 09:50:19.362533 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0107 09:50:19.364774 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T09:50:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:50:50Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f7200e5d1d13d232ad67de2ea89381542d858c266de6b68e33bfe97a520bfd66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://dd6371190af55f4fae494e73d316e7347f26bd60e0b02bc18c31ce1cf7f1bb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd6371190af55f4fae494e73d316e7347f26bd60e0b02bc18c31ce1cf7f1bb9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:49:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.799585 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.840267 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5cj94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdv7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdv7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5cj94\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.842147 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.842218 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.842242 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.842275 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.842298 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:50Z","lastTransitionTime":"2026-01-07T09:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.879672 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3942e752-44ba-4678-8723-6cd778e60d73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g97xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g97xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dvdrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.918084 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-n4kr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad935b69-bef7-46a2-a03a-367404c13329\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9czf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9czf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-n4kr4\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.922872 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.923047 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.923116 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.923189 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 07 09:50:50 crc kubenswrapper[5131]: E0107 09:50:50.923120 5131 
secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 07 09:50:50 crc kubenswrapper[5131]: E0107 09:50:50.923341 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-07 09:50:51.923310731 +0000 UTC m=+80.089612335 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 07 09:50:50 crc kubenswrapper[5131]: E0107 09:50:50.923236 5131 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 07 09:50:50 crc kubenswrapper[5131]: E0107 09:50:50.923391 5131 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 07 09:50:50 crc kubenswrapper[5131]: E0107 09:50:50.923402 5131 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 07 09:50:50 crc kubenswrapper[5131]: E0107 09:50:50.923407 5131 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 07 09:50:50 crc kubenswrapper[5131]: E0107 09:50:50.923429 5131 projected.go:194] Error preparing data for projected 
volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 07 09:50:50 crc kubenswrapper[5131]: E0107 09:50:50.923444 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-07 09:50:51.923431614 +0000 UTC m=+80.089733178 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 07 09:50:50 crc kubenswrapper[5131]: E0107 09:50:50.923447 5131 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 07 09:50:50 crc kubenswrapper[5131]: E0107 09:50:50.923467 5131 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 07 09:50:50 crc kubenswrapper[5131]: E0107 09:50:50.923532 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-07 09:50:51.923505846 +0000 UTC m=+80.089807440 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 07 09:50:50 crc kubenswrapper[5131]: E0107 09:50:50.923560 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-07 09:50:51.923548107 +0000 UTC m=+80.089849711 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.944360 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.944426 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.944450 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.944478 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.944502 5131 setters.go:618] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:50Z","lastTransitionTime":"2026-01-07T09:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 09:50:50 crc kubenswrapper[5131]: I0107 09:50:50.958291 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-mb6rx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e402924-308a-4d47-8bf8-24a147d5f8bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zr9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mb6rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.002955 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae48c9e0-ebbd-4c8e-9c54-f6b3ac967d34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://3383385e15ea13116da82fca0263faac293829a1d334c3ab9c3e887d3df064f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://81a19faef229379a9f11c9404f00a4cd033fe495e075b60878147f896005767f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a048a29a003bbae3bffb916e657c9b18246309ec82bcd1cf410f76e266ba25cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8b7f67a6eae4396f64fdd42279b61c6411a1dd1ad3f4d92b483b4cf59ff1284c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:49:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.024502 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.024622 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e-metrics-certs\") pod \"network-metrics-daemon-5cj94\" (UID: \"ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e\") " pod="openshift-multus/network-metrics-daemon-5cj94" Jan 07 09:50:51 crc kubenswrapper[5131]: E0107 09:50:51.024739 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:50:52.024708992 +0000 UTC m=+80.191010586 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:50:51 crc kubenswrapper[5131]: E0107 09:50:51.024746 5131 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 07 09:50:51 crc kubenswrapper[5131]: E0107 09:50:51.024816 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e-metrics-certs podName:ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e nodeName:}" failed. No retries permitted until 2026-01-07 09:50:52.024803605 +0000 UTC m=+80.191105199 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e-metrics-certs") pod "network-metrics-daemon-5cj94" (UID: "ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.046480 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.046557 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.046582 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.046615 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.046639 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:51Z","lastTransitionTime":"2026-01-07T09:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.149050 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.149116 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.149134 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.149164 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.149182 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:51Z","lastTransitionTime":"2026-01-07T09:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.251881 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.251951 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.251969 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.251993 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.252010 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:51Z","lastTransitionTime":"2026-01-07T09:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.354299 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.354360 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.354373 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.354392 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.354408 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:51Z","lastTransitionTime":"2026-01-07T09:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.456050 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.456093 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.456104 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.456121 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.456133 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:51Z","lastTransitionTime":"2026-01-07T09:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.534461 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"8349035bea90c93726846fedaf0bd5cb049f1342e4018ae6918e4ba2db800ca8"} Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.535808 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"c200305d2f766549ff3a535abf87ee1de170c4695210076a5039ad2bfee6d400"} Jan 07 09:50:51 crc kubenswrapper[5131]: E0107 09:50:51.536557 5131 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 07 09:50:51 crc kubenswrapper[5131]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Jan 07 09:50:51 crc kubenswrapper[5131]: set -o allexport Jan 07 09:50:51 crc kubenswrapper[5131]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Jan 07 09:50:51 crc kubenswrapper[5131]: source /etc/kubernetes/apiserver-url.env Jan 07 09:50:51 crc kubenswrapper[5131]: else Jan 07 09:50:51 crc kubenswrapper[5131]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Jan 07 09:50:51 crc kubenswrapper[5131]: exit 1 Jan 07 09:50:51 crc kubenswrapper[5131]: fi Jan 07 09:50:51 crc kubenswrapper[5131]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Jan 07 09:50:51 crc kubenswrapper[5131]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 07 09:50:51 crc kubenswrapper[5131]: > logger="UnhandledError" Jan 07 09:50:51 crc kubenswrapper[5131]: E0107 09:50:51.537758 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.537864 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-mb6rx" 
event={"ID":"1e402924-308a-4d47-8bf8-24a147d5f8bf","Type":"ContainerStarted","Data":"eeefe9331b3b745217923435e5a91766fe9b68c990c6aead13441a303376282e"} Jan 07 09:50:51 crc kubenswrapper[5131]: E0107 09:50:51.538385 5131 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePol
icy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.539413 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-n4kr4" event={"ID":"ad935b69-bef7-46a2-a03a-367404c13329","Type":"ContainerStarted","Data":"cf0149ee7495c3cc741d9ef73df2e4298e45d78b190a036516c810fbb965a563"} Jan 07 09:50:51 crc kubenswrapper[5131]: E0107 09:50:51.539589 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Jan 07 09:50:51 crc kubenswrapper[5131]: E0107 09:50:51.540012 5131 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 07 09:50:51 crc kubenswrapper[5131]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Jan 07 09:50:51 crc kubenswrapper[5131]: set -uo pipefail Jan 07 09:50:51 crc kubenswrapper[5131]: Jan 07 09:50:51 crc kubenswrapper[5131]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Jan 07 09:50:51 crc kubenswrapper[5131]: Jan 07 09:50:51 crc kubenswrapper[5131]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Jan 07 09:50:51 crc kubenswrapper[5131]: HOSTS_FILE="/etc/hosts" Jan 07 09:50:51 crc kubenswrapper[5131]: TEMP_FILE="/tmp/hosts.tmp" Jan 07 09:50:51 crc kubenswrapper[5131]: Jan 07 09:50:51 crc 
kubenswrapper[5131]: IFS=', ' read -r -a services <<< "${SERVICES}" Jan 07 09:50:51 crc kubenswrapper[5131]: Jan 07 09:50:51 crc kubenswrapper[5131]: # Make a temporary file with the old hosts file's attributes. Jan 07 09:50:51 crc kubenswrapper[5131]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Jan 07 09:50:51 crc kubenswrapper[5131]: echo "Failed to preserve hosts file. Exiting." Jan 07 09:50:51 crc kubenswrapper[5131]: exit 1 Jan 07 09:50:51 crc kubenswrapper[5131]: fi Jan 07 09:50:51 crc kubenswrapper[5131]: Jan 07 09:50:51 crc kubenswrapper[5131]: while true; do Jan 07 09:50:51 crc kubenswrapper[5131]: declare -A svc_ips Jan 07 09:50:51 crc kubenswrapper[5131]: for svc in "${services[@]}"; do Jan 07 09:50:51 crc kubenswrapper[5131]: # Fetch service IP from cluster dns if present. We make several tries Jan 07 09:50:51 crc kubenswrapper[5131]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Jan 07 09:50:51 crc kubenswrapper[5131]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Jan 07 09:50:51 crc kubenswrapper[5131]: # support UDP loadbalancers and require reaching DNS through TCP. Jan 07 09:50:51 crc kubenswrapper[5131]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 07 09:50:51 crc kubenswrapper[5131]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 07 09:50:51 crc kubenswrapper[5131]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 07 09:50:51 crc kubenswrapper[5131]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Jan 07 09:50:51 crc kubenswrapper[5131]: for i in ${!cmds[*]} Jan 07 09:50:51 crc kubenswrapper[5131]: do Jan 07 09:50:51 crc kubenswrapper[5131]: ips=($(eval "${cmds[i]}")) Jan 07 09:50:51 crc kubenswrapper[5131]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Jan 07 09:50:51 crc kubenswrapper[5131]: svc_ips["${svc}"]="${ips[@]}" Jan 07 09:50:51 crc kubenswrapper[5131]: break Jan 07 09:50:51 crc kubenswrapper[5131]: fi Jan 07 09:50:51 crc kubenswrapper[5131]: done Jan 07 09:50:51 crc kubenswrapper[5131]: done Jan 07 09:50:51 crc kubenswrapper[5131]: Jan 07 09:50:51 crc kubenswrapper[5131]: # Update /etc/hosts only if we get valid service IPs Jan 07 09:50:51 crc kubenswrapper[5131]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Jan 07 09:50:51 crc kubenswrapper[5131]: # Stale entries could exist in /etc/hosts if the service is deleted Jan 07 09:50:51 crc kubenswrapper[5131]: if [[ -n "${svc_ips[*]-}" ]]; then Jan 07 09:50:51 crc kubenswrapper[5131]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Jan 07 09:50:51 crc kubenswrapper[5131]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Jan 07 09:50:51 crc kubenswrapper[5131]: # Only continue rebuilding the hosts entries if its original content is preserved Jan 07 09:50:51 crc kubenswrapper[5131]: sleep 60 & wait Jan 07 09:50:51 crc kubenswrapper[5131]: continue Jan 07 09:50:51 crc kubenswrapper[5131]: fi Jan 07 09:50:51 crc kubenswrapper[5131]: Jan 07 09:50:51 crc kubenswrapper[5131]: # Append resolver entries for services Jan 07 09:50:51 crc kubenswrapper[5131]: rc=0 Jan 07 09:50:51 crc kubenswrapper[5131]: for svc in "${!svc_ips[@]}"; do Jan 07 09:50:51 crc kubenswrapper[5131]: for ip in ${svc_ips[${svc}]}; do Jan 07 09:50:51 crc kubenswrapper[5131]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Jan 07 09:50:51 crc kubenswrapper[5131]: done Jan 07 09:50:51 crc kubenswrapper[5131]: done Jan 07 09:50:51 crc kubenswrapper[5131]: if [[ $rc -ne 0 ]]; then Jan 07 09:50:51 crc kubenswrapper[5131]: sleep 60 & wait Jan 07 09:50:51 crc kubenswrapper[5131]: continue Jan 07 09:50:51 crc kubenswrapper[5131]: fi Jan 07 09:50:51 crc kubenswrapper[5131]: Jan 07 09:50:51 crc kubenswrapper[5131]: Jan 07 09:50:51 crc kubenswrapper[5131]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Jan 07 09:50:51 crc kubenswrapper[5131]: # Replace /etc/hosts with our modified version if needed Jan 07 09:50:51 crc kubenswrapper[5131]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Jan 07 09:50:51 crc kubenswrapper[5131]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Jan 07 09:50:51 crc kubenswrapper[5131]: fi Jan 07 09:50:51 crc kubenswrapper[5131]: sleep 60 & wait Jan 07 09:50:51 crc kubenswrapper[5131]: unset svc_ips Jan 07 09:50:51 crc kubenswrapper[5131]: done Jan 07 09:50:51 crc kubenswrapper[5131]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zr9m5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-mb6rx_openshift-dns(1e402924-308a-4d47-8bf8-24a147d5f8bf): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 07 09:50:51 crc kubenswrapper[5131]: > logger="UnhandledError" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.540996 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gbjvz" event={"ID":"5b188180-f777-4a12-845b-d19fd5853d85","Type":"ContainerStarted","Data":"8fdc47f1d6edab3807e93dc6f212f39604e5b9c0d42bbcbbd93d37137d73ea4d"} Jan 07 09:50:51 crc kubenswrapper[5131]: E0107 09:50:51.541369 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" 
pod="openshift-dns/node-resolver-mb6rx" podUID="1e402924-308a-4d47-8bf8-24a147d5f8bf" Jan 07 09:50:51 crc kubenswrapper[5131]: E0107 09:50:51.542082 5131 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 07 09:50:51 crc kubenswrapper[5131]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Jan 07 09:50:51 crc kubenswrapper[5131]: set -euo pipefail Jan 07 09:50:51 crc kubenswrapper[5131]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Jan 07 09:50:51 crc kubenswrapper[5131]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Jan 07 09:50:51 crc kubenswrapper[5131]: # As the secret mount is optional we must wait for the files to be present. Jan 07 09:50:51 crc kubenswrapper[5131]: # The service is created in monitor.yaml and this is created in sdn.yaml. Jan 07 09:50:51 crc kubenswrapper[5131]: TS=$(date +%s) Jan 07 09:50:51 crc kubenswrapper[5131]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Jan 07 09:50:51 crc kubenswrapper[5131]: HAS_LOGGED_INFO=0 Jan 07 09:50:51 crc kubenswrapper[5131]: Jan 07 09:50:51 crc kubenswrapper[5131]: log_missing_certs(){ Jan 07 09:50:51 crc kubenswrapper[5131]: CUR_TS=$(date +%s) Jan 07 09:50:51 crc kubenswrapper[5131]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Jan 07 09:50:51 crc kubenswrapper[5131]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Jan 07 09:50:51 crc kubenswrapper[5131]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Jan 07 09:50:51 crc kubenswrapper[5131]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Jan 07 09:50:51 crc kubenswrapper[5131]: HAS_LOGGED_INFO=1 Jan 07 09:50:51 crc kubenswrapper[5131]: fi Jan 07 09:50:51 crc kubenswrapper[5131]: } Jan 07 09:50:51 crc kubenswrapper[5131]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Jan 07 09:50:51 crc kubenswrapper[5131]: log_missing_certs Jan 07 09:50:51 crc kubenswrapper[5131]: sleep 5 Jan 07 09:50:51 crc kubenswrapper[5131]: done Jan 07 09:50:51 crc kubenswrapper[5131]: Jan 07 09:50:51 crc kubenswrapper[5131]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Jan 07 09:50:51 crc kubenswrapper[5131]: exec /usr/bin/kube-rbac-proxy \ Jan 07 09:50:51 crc kubenswrapper[5131]: --logtostderr \ Jan 07 09:50:51 crc kubenswrapper[5131]: --secure-listen-address=:9108 \ Jan 07 09:50:51 crc kubenswrapper[5131]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Jan 07 09:50:51 crc kubenswrapper[5131]: --upstream=http://127.0.0.1:29108/ \ Jan 07 09:50:51 crc kubenswrapper[5131]: --tls-private-key-file=${TLS_PK} \ Jan 07 09:50:51 crc kubenswrapper[5131]: --tls-cert-file=${TLS_CERT} Jan 07 09:50:51 crc kubenswrapper[5131]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r9czf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-n4kr4_openshift-ovn-kubernetes(ad935b69-bef7-46a2-a03a-367404c13329): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 07 09:50:51 crc kubenswrapper[5131]: > logger="UnhandledError" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.543690 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wcqw9" event={"ID":"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1","Type":"ContainerStarted","Data":"b2277bbef7245215bb6b1d01a56c655f3b81abd4e30c8f853b76512becf326b8"} Jan 07 09:50:51 crc kubenswrapper[5131]: E0107 09:50:51.544760 5131 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 07 09:50:51 crc kubenswrapper[5131]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 07 09:50:51 crc kubenswrapper[5131]: if [[ -f "/env/_master" ]]; then Jan 07 09:50:51 crc kubenswrapper[5131]: set -o allexport Jan 07 09:50:51 crc kubenswrapper[5131]: source "/env/_master" Jan 07 09:50:51 crc kubenswrapper[5131]: set +o allexport Jan 07 
09:50:51 crc kubenswrapper[5131]: fi Jan 07 09:50:51 crc kubenswrapper[5131]: Jan 07 09:50:51 crc kubenswrapper[5131]: ovn_v4_join_subnet_opt= Jan 07 09:50:51 crc kubenswrapper[5131]: if [[ "" != "" ]]; then Jan 07 09:50:51 crc kubenswrapper[5131]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Jan 07 09:50:51 crc kubenswrapper[5131]: fi Jan 07 09:50:51 crc kubenswrapper[5131]: ovn_v6_join_subnet_opt= Jan 07 09:50:51 crc kubenswrapper[5131]: if [[ "" != "" ]]; then Jan 07 09:50:51 crc kubenswrapper[5131]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Jan 07 09:50:51 crc kubenswrapper[5131]: fi Jan 07 09:50:51 crc kubenswrapper[5131]: Jan 07 09:50:51 crc kubenswrapper[5131]: ovn_v4_transit_switch_subnet_opt= Jan 07 09:50:51 crc kubenswrapper[5131]: if [[ "" != "" ]]; then Jan 07 09:50:51 crc kubenswrapper[5131]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Jan 07 09:50:51 crc kubenswrapper[5131]: fi Jan 07 09:50:51 crc kubenswrapper[5131]: ovn_v6_transit_switch_subnet_opt= Jan 07 09:50:51 crc kubenswrapper[5131]: if [[ "" != "" ]]; then Jan 07 09:50:51 crc kubenswrapper[5131]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Jan 07 09:50:51 crc kubenswrapper[5131]: fi Jan 07 09:50:51 crc kubenswrapper[5131]: Jan 07 09:50:51 crc kubenswrapper[5131]: dns_name_resolver_enabled_flag= Jan 07 09:50:51 crc kubenswrapper[5131]: if [[ "false" == "true" ]]; then Jan 07 09:50:51 crc kubenswrapper[5131]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Jan 07 09:50:51 crc kubenswrapper[5131]: fi Jan 07 09:50:51 crc kubenswrapper[5131]: Jan 07 09:50:51 crc kubenswrapper[5131]: persistent_ips_enabled_flag="--enable-persistent-ips" Jan 07 09:50:51 crc kubenswrapper[5131]: Jan 07 09:50:51 crc kubenswrapper[5131]: # This is needed so that converting clusters from GA to TP Jan 07 09:50:51 crc kubenswrapper[5131]: # will rollout control plane pods as well Jan 07 09:50:51 crc kubenswrapper[5131]: 
network_segmentation_enabled_flag= Jan 07 09:50:51 crc kubenswrapper[5131]: multi_network_enabled_flag= Jan 07 09:50:51 crc kubenswrapper[5131]: if [[ "true" == "true" ]]; then Jan 07 09:50:51 crc kubenswrapper[5131]: multi_network_enabled_flag="--enable-multi-network" Jan 07 09:50:51 crc kubenswrapper[5131]: fi Jan 07 09:50:51 crc kubenswrapper[5131]: if [[ "true" == "true" ]]; then Jan 07 09:50:51 crc kubenswrapper[5131]: if [[ "true" != "true" ]]; then Jan 07 09:50:51 crc kubenswrapper[5131]: multi_network_enabled_flag="--enable-multi-network" Jan 07 09:50:51 crc kubenswrapper[5131]: fi Jan 07 09:50:51 crc kubenswrapper[5131]: network_segmentation_enabled_flag="--enable-network-segmentation" Jan 07 09:50:51 crc kubenswrapper[5131]: fi Jan 07 09:50:51 crc kubenswrapper[5131]: Jan 07 09:50:51 crc kubenswrapper[5131]: route_advertisements_enable_flag= Jan 07 09:50:51 crc kubenswrapper[5131]: if [[ "false" == "true" ]]; then Jan 07 09:50:51 crc kubenswrapper[5131]: route_advertisements_enable_flag="--enable-route-advertisements" Jan 07 09:50:51 crc kubenswrapper[5131]: fi Jan 07 09:50:51 crc kubenswrapper[5131]: Jan 07 09:50:51 crc kubenswrapper[5131]: preconfigured_udn_addresses_enable_flag= Jan 07 09:50:51 crc kubenswrapper[5131]: if [[ "false" == "true" ]]; then Jan 07 09:50:51 crc kubenswrapper[5131]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Jan 07 09:50:51 crc kubenswrapper[5131]: fi Jan 07 09:50:51 crc kubenswrapper[5131]: Jan 07 09:50:51 crc kubenswrapper[5131]: # Enable multi-network policy if configured (control-plane always full mode) Jan 07 09:50:51 crc kubenswrapper[5131]: multi_network_policy_enabled_flag= Jan 07 09:50:51 crc kubenswrapper[5131]: if [[ "false" == "true" ]]; then Jan 07 09:50:51 crc kubenswrapper[5131]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Jan 07 09:50:51 crc kubenswrapper[5131]: fi Jan 07 09:50:51 crc kubenswrapper[5131]: Jan 07 09:50:51 crc kubenswrapper[5131]: # Enable 
admin network policy if configured (control-plane always full mode) Jan 07 09:50:51 crc kubenswrapper[5131]: admin_network_policy_enabled_flag= Jan 07 09:50:51 crc kubenswrapper[5131]: if [[ "true" == "true" ]]; then Jan 07 09:50:51 crc kubenswrapper[5131]: admin_network_policy_enabled_flag="--enable-admin-network-policy" Jan 07 09:50:51 crc kubenswrapper[5131]: fi Jan 07 09:50:51 crc kubenswrapper[5131]: Jan 07 09:50:51 crc kubenswrapper[5131]: if [ "shared" == "shared" ]; then Jan 07 09:50:51 crc kubenswrapper[5131]: gateway_mode_flags="--gateway-mode shared" Jan 07 09:50:51 crc kubenswrapper[5131]: elif [ "shared" == "local" ]; then Jan 07 09:50:51 crc kubenswrapper[5131]: gateway_mode_flags="--gateway-mode local" Jan 07 09:50:51 crc kubenswrapper[5131]: else Jan 07 09:50:51 crc kubenswrapper[5131]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." Jan 07 09:50:51 crc kubenswrapper[5131]: exit 1 Jan 07 09:50:51 crc kubenswrapper[5131]: fi Jan 07 09:50:51 crc kubenswrapper[5131]: Jan 07 09:50:51 crc kubenswrapper[5131]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Jan 07 09:50:51 crc kubenswrapper[5131]: exec /usr/bin/ovnkube \ Jan 07 09:50:51 crc kubenswrapper[5131]: --enable-interconnect \ Jan 07 09:50:51 crc kubenswrapper[5131]: --init-cluster-manager "${K8S_NODE}" \ Jan 07 09:50:51 crc kubenswrapper[5131]: --config-file=/run/ovnkube-config/ovnkube.conf \ Jan 07 09:50:51 crc kubenswrapper[5131]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Jan 07 09:50:51 crc kubenswrapper[5131]: --metrics-bind-address "127.0.0.1:29108" \ Jan 07 09:50:51 crc kubenswrapper[5131]: --metrics-enable-pprof \ Jan 07 09:50:51 crc kubenswrapper[5131]: --metrics-enable-config-duration \ Jan 07 09:50:51 crc kubenswrapper[5131]: ${ovn_v4_join_subnet_opt} \ Jan 07 09:50:51 crc kubenswrapper[5131]: ${ovn_v6_join_subnet_opt} \ Jan 07 09:50:51 crc kubenswrapper[5131]: ${ovn_v4_transit_switch_subnet_opt} \ 
Jan 07 09:50:51 crc kubenswrapper[5131]: ${ovn_v6_transit_switch_subnet_opt} \ Jan 07 09:50:51 crc kubenswrapper[5131]: ${dns_name_resolver_enabled_flag} \ Jan 07 09:50:51 crc kubenswrapper[5131]: ${persistent_ips_enabled_flag} \ Jan 07 09:50:51 crc kubenswrapper[5131]: ${multi_network_enabled_flag} \ Jan 07 09:50:51 crc kubenswrapper[5131]: ${network_segmentation_enabled_flag} \ Jan 07 09:50:51 crc kubenswrapper[5131]: ${gateway_mode_flags} \ Jan 07 09:50:51 crc kubenswrapper[5131]: ${route_advertisements_enable_flag} \ Jan 07 09:50:51 crc kubenswrapper[5131]: ${preconfigured_udn_addresses_enable_flag} \ Jan 07 09:50:51 crc kubenswrapper[5131]: --enable-egress-ip=true \ Jan 07 09:50:51 crc kubenswrapper[5131]: --enable-egress-firewall=true \ Jan 07 09:50:51 crc kubenswrapper[5131]: --enable-egress-qos=true \ Jan 07 09:50:51 crc kubenswrapper[5131]: --enable-egress-service=true \ Jan 07 09:50:51 crc kubenswrapper[5131]: --enable-multicast \ Jan 07 09:50:51 crc kubenswrapper[5131]: --enable-multi-external-gateway=true \ Jan 07 09:50:51 crc kubenswrapper[5131]: ${multi_network_policy_enabled_flag} \ Jan 07 09:50:51 crc kubenswrapper[5131]: ${admin_network_policy_enabled_flag} Jan 07 09:50:51 crc kubenswrapper[5131]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r9czf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-n4kr4_openshift-ovn-kubernetes(ad935b69-bef7-46a2-a03a-367404c13329): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 07 09:50:51 crc kubenswrapper[5131]: > logger="UnhandledError" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.545673 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" event={"ID":"3942e752-44ba-4678-8723-6cd778e60d73","Type":"ContainerStarted","Data":"c2334609e986d44db8273ad63e522ecc3298fe873978c100d13966a767262ad0"} Jan 07 09:50:51 crc kubenswrapper[5131]: E0107 09:50:51.545681 5131 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 07 09:50:51 crc kubenswrapper[5131]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Jan 07 09:50:51 crc kubenswrapper[5131]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon 
$MULTUS_DAEMON_OPT Jan 07 09:50:51 crc kubenswrapper[5131]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{
Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pf4gw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-wcqw9_openshift-multus(a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 07 09:50:51 crc kubenswrapper[5131]: > logger="UnhandledError" Jan 07 09:50:51 crc kubenswrapper[5131]: E0107 09:50:51.545705 5131 kuberuntime_manager.go:1358] "Unhandled Error" err="init container 
&Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xwlcw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-gbjvz_openshift-multus(5b188180-f777-4a12-845b-d19fd5853d85): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 07 09:50:51 crc kubenswrapper[5131]: E0107 09:50:51.545912 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-n4kr4" podUID="ad935b69-bef7-46a2-a03a-367404c13329" Jan 07 09:50:51 crc kubenswrapper[5131]: E0107 09:50:51.546775 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-wcqw9" podUID="a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1" Jan 07 09:50:51 crc kubenswrapper[5131]: E0107 09:50:51.546970 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-gbjvz" podUID="5b188180-f777-4a12-845b-d19fd5853d85" Jan 07 09:50:51 crc kubenswrapper[5131]: E0107 09:50:51.547389 5131 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m 
DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g97xn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-dvdrn_openshift-machine-config-operator(3942e752-44ba-4678-8723-6cd778e60d73): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.547563 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"bda7cbd117853b64f0a6a0358c9704f2d2e8bdd10db11d776f2c04ce8caf4936"} Jan 07 09:50:51 crc kubenswrapper[5131]: E0107 09:50:51.548908 5131 
kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 07 09:50:51 crc kubenswrapper[5131]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 07 09:50:51 crc kubenswrapper[5131]: if [[ -f "/env/_master" ]]; then Jan 07 09:50:51 crc kubenswrapper[5131]: set -o allexport Jan 07 09:50:51 crc kubenswrapper[5131]: source "/env/_master" Jan 07 09:50:51 crc kubenswrapper[5131]: set +o allexport Jan 07 09:50:51 crc kubenswrapper[5131]: fi Jan 07 09:50:51 crc kubenswrapper[5131]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Jan 07 09:50:51 crc kubenswrapper[5131]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Jan 07 09:50:51 crc kubenswrapper[5131]: ho_enable="--enable-hybrid-overlay" Jan 07 09:50:51 crc kubenswrapper[5131]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Jan 07 09:50:51 crc kubenswrapper[5131]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Jan 07 09:50:51 crc kubenswrapper[5131]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Jan 07 09:50:51 crc kubenswrapper[5131]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 07 09:50:51 crc kubenswrapper[5131]: --webhook-cert-dir="/etc/webhook-cert" \ Jan 07 09:50:51 crc kubenswrapper[5131]: --webhook-host=127.0.0.1 \ Jan 07 09:50:51 crc kubenswrapper[5131]: --webhook-port=9743 \ Jan 07 09:50:51 crc kubenswrapper[5131]: ${ho_enable} \ Jan 07 09:50:51 crc kubenswrapper[5131]: --enable-interconnect \ Jan 07 09:50:51 crc kubenswrapper[5131]: --disable-approver \ Jan 07 09:50:51 crc kubenswrapper[5131]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Jan 07 09:50:51 
crc kubenswrapper[5131]: --wait-for-kubernetes-api=200s \ Jan 07 09:50:51 crc kubenswrapper[5131]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Jan 07 09:50:51 crc kubenswrapper[5131]: --loglevel="${LOGLEVEL}" Jan 07 09:50:51 crc kubenswrapper[5131]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe
:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 07 09:50:51 crc kubenswrapper[5131]: > logger="UnhandledError" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.549412 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" event={"ID":"592342ad-cf5f-4290-aa15-e99a6454cbf5","Type":"ContainerStarted","Data":"6920e97d4ae3db7ace2a35f2b0285671fe6c1cb143daeda1d12ff8dfe1d750af"} Jan 07 09:50:51 crc kubenswrapper[5131]: E0107 09:50:51.552200 5131 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 07 09:50:51 crc kubenswrapper[5131]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Jan 07 09:50:51 crc kubenswrapper[5131]: apiVersion: v1 Jan 07 09:50:51 crc kubenswrapper[5131]: clusters: Jan 07 09:50:51 crc kubenswrapper[5131]: - cluster: Jan 07 09:50:51 crc kubenswrapper[5131]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Jan 07 09:50:51 crc kubenswrapper[5131]: server: https://api-int.crc.testing:6443 Jan 07 09:50:51 crc kubenswrapper[5131]: name: default-cluster Jan 07 09:50:51 crc kubenswrapper[5131]: contexts: Jan 07 09:50:51 crc kubenswrapper[5131]: - context: Jan 07 09:50:51 crc kubenswrapper[5131]: cluster: default-cluster Jan 07 09:50:51 crc kubenswrapper[5131]: namespace: default Jan 07 09:50:51 crc kubenswrapper[5131]: user: default-auth Jan 07 09:50:51 crc kubenswrapper[5131]: name: default-context Jan 07 09:50:51 crc kubenswrapper[5131]: current-context: default-context Jan 07 09:50:51 crc kubenswrapper[5131]: kind: Config Jan 07 09:50:51 crc kubenswrapper[5131]: 
preferences: {} Jan 07 09:50:51 crc kubenswrapper[5131]: users: Jan 07 09:50:51 crc kubenswrapper[5131]: - name: default-auth Jan 07 09:50:51 crc kubenswrapper[5131]: user: Jan 07 09:50:51 crc kubenswrapper[5131]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 07 09:50:51 crc kubenswrapper[5131]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 07 09:50:51 crc kubenswrapper[5131]: EOF Jan 07 09:50:51 crc kubenswrapper[5131]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-78wtj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-kpj7m_openshift-ovn-kubernetes(592342ad-cf5f-4290-aa15-e99a6454cbf5): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 07 09:50:51 crc kubenswrapper[5131]: > logger="UnhandledError" Jan 07 09:50:51 crc kubenswrapper[5131]: E0107 09:50:51.552361 5131 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 
--config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g97xn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-dvdrn_openshift-machine-config-operator(3942e752-44ba-4678-8723-6cd778e60d73): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 07 09:50:51 crc 
kubenswrapper[5131]: E0107 09:50:51.552463 5131 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 07 09:50:51 crc kubenswrapper[5131]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 07 09:50:51 crc kubenswrapper[5131]: if [[ -f "/env/_master" ]]; then Jan 07 09:50:51 crc kubenswrapper[5131]: set -o allexport Jan 07 09:50:51 crc kubenswrapper[5131]: source "/env/_master" Jan 07 09:50:51 crc kubenswrapper[5131]: set +o allexport Jan 07 09:50:51 crc kubenswrapper[5131]: fi Jan 07 09:50:51 crc kubenswrapper[5131]: Jan 07 09:50:51 crc kubenswrapper[5131]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Jan 07 09:50:51 crc kubenswrapper[5131]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 07 09:50:51 crc kubenswrapper[5131]: --disable-webhook \ Jan 07 09:50:51 crc kubenswrapper[5131]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Jan 07 09:50:51 crc kubenswrapper[5131]: --loglevel="${LOGLEVEL}" Jan 07 09:50:51 crc kubenswrapper[5131]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 07 09:50:51 crc kubenswrapper[5131]: > logger="UnhandledError" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.553004 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-mrsjt" event={"ID":"b094e1e2-9ae5-4cf3-9cef-71c25224af2a","Type":"ContainerStarted","Data":"aecbde7e20fdf57c96f3ce76d3b0b93389f2469fde4ff653a69636ef9fae261b"} Jan 07 09:50:51 crc kubenswrapper[5131]: E0107 09:50:51.553289 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services 
have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" podUID="592342ad-cf5f-4290-aa15-e99a6454cbf5" Jan 07 09:50:51 crc kubenswrapper[5131]: E0107 09:50:51.553453 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73" Jan 07 09:50:51 crc kubenswrapper[5131]: E0107 09:50:51.553569 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Jan 07 09:50:51 crc kubenswrapper[5131]: E0107 09:50:51.555024 5131 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 07 09:50:51 crc kubenswrapper[5131]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Jan 07 09:50:51 crc kubenswrapper[5131]: while [ true ]; Jan 07 09:50:51 crc kubenswrapper[5131]: do Jan 07 09:50:51 crc kubenswrapper[5131]: for f in $(ls /tmp/serviceca); do Jan 07 09:50:51 crc kubenswrapper[5131]: echo $f Jan 07 09:50:51 crc kubenswrapper[5131]: ca_file_path="/tmp/serviceca/${f}" 
Jan 07 09:50:51 crc kubenswrapper[5131]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Jan 07 09:50:51 crc kubenswrapper[5131]: reg_dir_path="/etc/docker/certs.d/${f}" Jan 07 09:50:51 crc kubenswrapper[5131]: if [ -e "${reg_dir_path}" ]; then Jan 07 09:50:51 crc kubenswrapper[5131]: cp -u $ca_file_path $reg_dir_path/ca.crt Jan 07 09:50:51 crc kubenswrapper[5131]: else Jan 07 09:50:51 crc kubenswrapper[5131]: mkdir $reg_dir_path Jan 07 09:50:51 crc kubenswrapper[5131]: cp $ca_file_path $reg_dir_path/ca.crt Jan 07 09:50:51 crc kubenswrapper[5131]: fi Jan 07 09:50:51 crc kubenswrapper[5131]: done Jan 07 09:50:51 crc kubenswrapper[5131]: for d in $(ls /etc/docker/certs.d); do Jan 07 09:50:51 crc kubenswrapper[5131]: echo $d Jan 07 09:50:51 crc kubenswrapper[5131]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Jan 07 09:50:51 crc kubenswrapper[5131]: reg_conf_path="/tmp/serviceca/${dp}" Jan 07 09:50:51 crc kubenswrapper[5131]: if [ ! -e "${reg_conf_path}" ]; then Jan 07 09:50:51 crc kubenswrapper[5131]: rm -rf /etc/docker/certs.d/$d Jan 07 09:50:51 crc kubenswrapper[5131]: fi Jan 07 09:50:51 crc kubenswrapper[5131]: done Jan 07 09:50:51 crc kubenswrapper[5131]: sleep 60 & wait ${!} Jan 07 09:50:51 crc kubenswrapper[5131]: done Jan 07 09:50:51 crc kubenswrapper[5131]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qgbqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-mrsjt_openshift-image-registry(b094e1e2-9ae5-4cf3-9cef-71c25224af2a): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 07 09:50:51 crc kubenswrapper[5131]: > logger="UnhandledError" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.555748 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2562cbe-7a5f-44ee-ab23-4c3c8713b3c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c8a53522c69210792aee2dce5b7e8e34b2cf22c24393e063a59b465373ab5096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1e073b8c65df9f45d38018d244c88e515556561a3c3feb4a2cf3d270c77064b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ffbff8be21e181dfc3205fb877325fee8beefff7ba32e422a2619e1ab9772a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\"
:0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1043d76beefe7dc0844f533476401d9ca57619ede4a2fa4b59df7c24ef674024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1043d76beefe7dc0844f533476401d9ca57619ede4a2fa4b59df7c24ef674024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:49:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:51 crc kubenswrapper[5131]: E0107 09:50:51.556081 5131 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-mrsjt" podUID="b094e1e2-9ae5-4cf3-9cef-71c25224af2a" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.561177 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.561250 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.561275 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.561307 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.561331 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:51Z","lastTransitionTime":"2026-01-07T09:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.571270 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.584294 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.604159 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbjvz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b188180-f777-4a12-845b-d19fd5853d85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"
name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbjvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.617351 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.629686 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.653361 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.662741 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mrsjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b094e1e2-9ae5-4cf3-9cef-71c25224af2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers 
with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgbqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mrsjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.664668 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.664705 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 
09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.664714 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.664727 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.664738 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:51Z","lastTransitionTime":"2026-01-07T09:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.677908 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-wcqw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf4gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wcqw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.698978 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"592342ad-cf5f-4290-aa15-e99a6454cbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-me
trics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/sec
rets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"
etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":
\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kpj7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.708626 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf037731-32c8-4638-9ee7-13bdb0c68279\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://45412eb529aa08671402f5e439a2d0258d5e438466b13a1a3a8264e3eb9c8407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c1
72bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b059fcd2d184beda447aba3f6a320cb6d3f0c1bc3061fc47b9020d4c03f4a020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b059fcd2d184beda447aba3f6a320cb6d3f0c1bc3061fc47b9020d4c03f4a020\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\
":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:49:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.726472 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a8b62c-1e16-4bf4-8a1a-7e21eea28a36\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://9ab1390be253b0acce2b38b656b6ab5fb3b2b0b0df6b0bf4aa1c9a6706d5b377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://222177f33dbcd646941928b01ab9b05233038233497ca1767fba6f7706b3dc93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f94695038c6c0633c279f363909c7c60ac6e6487469757ddfa9a64766e9ad38a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://81989c7ac8
01be354f6f1e78382dbefc67b72ef6a85367ea48e04fc6ff4f128e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce5c5322e4dfa939241d2f3f807c9d150117431e391c5f986a200413b054a33\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-07T09:50:19Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW0107 09:50:18.874623 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0107 09:50:18.874740 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0107 09:50:18.875448 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1194057718/tls.crt::/tmp/serving-cert-1194057718/tls.key\\\\\\\"\\\\nI0107 09:50:19.352672 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0107 09:50:19.355791 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0107 09:50:19.355824 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0107 09:50:19.355916 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0107 09:50:19.355934 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0107 09:50:19.362427 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0107 09:50:19.362473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0107 09:50:19.362471 1 genericapiserver.go:546] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0107 09:50:19.362482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 09:50:19.362512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0107 09:50:19.362519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0107 09:50:19.362527 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0107 09:50:19.362533 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0107 09:50:19.364774 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T09:50:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:50:50Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f7200e5d1d13d232ad67de2ea89381542d858c266de6b68e33bfe97a520bfd66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://dd6371190af55f4fae494e73d316e7347f26bd60e0b02bc18c31ce1cf7f1bb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd6371190af55f4fae494e73d316e7347f26bd60e0b02bc18c31ce1cf7f1bb9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:49:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.740458 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.750747 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5cj94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdv7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdv7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5cj94\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.760584 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3942e752-44ba-4678-8723-6cd778e60d73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g97xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g97xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dvdrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.767629 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.767703 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.767741 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.767763 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.767775 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:51Z","lastTransitionTime":"2026-01-07T09:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.774567 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-n4kr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad935b69-bef7-46a2-a03a-367404c13329\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9czf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9czf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-n4kr4\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.786399 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-mb6rx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e402924-308a-4d47-8bf8-24a147d5f8bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zr9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mb6rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.814942 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84a14d49-a62a-496d-9134-f47c75840988\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://bcc1f440c98d635bb4817103fd1d9a17926b7a874f95ff484233a874c8eadeb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://8a7484877b854cc26fd09edc6fd5c32934c1dffbbe432bfe7aff19ab695ef69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://75bb0f73ec339c2c6734121cc7d17e1fc680fd5202133c971e39ab46778e5714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bbdfb2d1ed90a5108986f54b916f1abbd45a3bae0271525826521f154c84eb84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://6d3a5a8dfdfcd032674d1e587d9cbd4f65ba801617ba61300364dac7a766bcc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://86a293f775ea339fa870889624391ae039158ac4544d88b6f9c9d7c136e716a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86a293f775ea339fa870889624391ae039158ac4544d88b6f9c9d7c136e716a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://102d63810639c2cee7fa3e0fef9769b09374348e27bc61573718700039515aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://102d63810639c2cee7fa3e0fef9769b09374348e27bc61573718700039515aa7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://160fd415fae252c3639e426e9905fd01e6e8f42b4cbb66f8169427c602cc373f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://160fd415fae252c3639e426e9905fd01e6e8f42b4cbb66f8169427c602cc373f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T09:49:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T09:49:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:49:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.831759 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae48c9e0-ebbd-4c8e-9c54-f6b3ac967d34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://3383385e15ea13116da82fca0263faac293829a1d334c3ab9c3e887d3df064f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://81a19faef229379a9f11c9404f00a4cd033fe495e075b60878147f896005767f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a048a29a003bbae3bffb916e657c9b18246309ec82bcd1cf410f76e266ba25cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8b7f67a6eae4396f64fdd42279b61c6411a1dd1ad3f4d92b483b4cf59ff1284c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:49:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.849681 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae48c9e0-ebbd-4c8e-9c54-f6b3ac967d34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://3383385e15ea13116da82fca0263faac293829a1d334c3ab9c3e887d3df064f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439
334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://81a19faef229379a9f11c9404f00a4cd033fe495e075b60878147f896005767f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a048a29a003bbae3bffb916e657c9b18246309ec82bcd1cf410f76e266ba25cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8b7f67a6eae4396f64fdd42279b61c6411a1dd1ad3f4d92b483b4cf59ff1284c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},
\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:49:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.863441 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2562cbe-7a5f-44ee-ab23-4c3c8713b3c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c8a53522c69210792aee2dce5b7e8e34b2cf22c24393e063a59b465373ab5096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1e073b8c65df9f45d38018d244c88e515556561a3c3feb4a2cf3d270c77064b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ffbff8be21e181dfc3205fb877325fee8beefff7ba32e422a2619e1ab9772a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\"
:0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1043d76beefe7dc0844f533476401d9ca57619ede4a2fa4b59df7c24ef674024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1043d76beefe7dc0844f533476401d9ca57619ede4a2fa4b59df7c24ef674024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:49:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.870713 5131 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.870772 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.870783 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.870803 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.870819 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:51Z","lastTransitionTime":"2026-01-07T09:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.882470 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.918691 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.936620 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.936735 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.936824 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.936925 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 07 09:50:51 crc kubenswrapper[5131]: E0107 09:50:51.936965 5131 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 07 09:50:51 crc kubenswrapper[5131]: E0107 09:50:51.937070 5131 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 07 09:50:51 crc kubenswrapper[5131]: E0107 09:50:51.937106 5131 projected.go:289] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 07 09:50:51 crc kubenswrapper[5131]: E0107 09:50:51.937134 5131 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 07 09:50:51 crc kubenswrapper[5131]: E0107 09:50:51.937194 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-07 09:50:53.937159657 +0000 UTC m=+82.103461261 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 07 09:50:51 crc kubenswrapper[5131]: E0107 09:50:51.937080 5131 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 07 09:50:51 crc kubenswrapper[5131]: E0107 09:50:51.937314 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-07 09:50:53.937215999 +0000 UTC m=+82.103517603 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 07 09:50:51 crc kubenswrapper[5131]: E0107 09:50:51.937285 5131 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 07 09:50:51 crc kubenswrapper[5131]: E0107 09:50:51.937383 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-07 09:50:53.937363152 +0000 UTC m=+82.103664746 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 07 09:50:51 crc kubenswrapper[5131]: E0107 09:50:51.937408 5131 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 07 09:50:51 crc kubenswrapper[5131]: E0107 09:50:51.937433 5131 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 07 09:50:51 crc kubenswrapper[5131]: E0107 09:50:51.937536 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-07 09:50:53.937506496 +0000 UTC m=+82.103808070 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.968928 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbjvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b188180-f777-4a12-845b-d19fd5853d85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbjvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.973968 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.974032 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.974050 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:51 
crc kubenswrapper[5131]: I0107 09:50:51.974076 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:51 crc kubenswrapper[5131]: I0107 09:50:51.974094 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:51Z","lastTransitionTime":"2026-01-07T09:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.004758 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.038324 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.038460 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e-metrics-certs\") pod \"network-metrics-daemon-5cj94\" (UID: \"ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e\") " pod="openshift-multus/network-metrics-daemon-5cj94" Jan 07 09:50:52 crc kubenswrapper[5131]: E0107 09:50:52.038550 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:50:54.038520877 +0000 UTC m=+82.204822471 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:50:52 crc kubenswrapper[5131]: E0107 09:50:52.038603 5131 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 07 09:50:52 crc kubenswrapper[5131]: E0107 09:50:52.038664 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e-metrics-certs podName:ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e nodeName:}" failed. No retries permitted until 2026-01-07 09:50:54.03865199 +0000 UTC m=+82.204953554 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e-metrics-certs") pod "network-metrics-daemon-5cj94" (UID: "ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.039155 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.076969 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.077016 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.077028 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.077046 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.077061 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:52Z","lastTransitionTime":"2026-01-07T09:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.079904 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.116048 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mrsjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b094e1e2-9ae5-4cf3-9cef-71c25224af2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgbqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mrsjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.163748 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-wcqw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf4gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wcqw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.179169 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.179172 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 07 09:50:52 crc kubenswrapper[5131]: E0107 09:50:52.179340 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 07 09:50:52 crc kubenswrapper[5131]: E0107 09:50:52.179501 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.179631 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 07 09:50:52 crc kubenswrapper[5131]: E0107 09:50:52.179790 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.180699 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.180770 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.180795 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.180826 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.180900 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:52Z","lastTransitionTime":"2026-01-07T09:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.182857 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cj94" Jan 07 09:50:52 crc kubenswrapper[5131]: E0107 09:50:52.183344 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5cj94" podUID="ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.183798 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01080b46-74f1-4191-8755-5152a57b3b25" path="/var/lib/kubelet/pods/01080b46-74f1-4191-8755-5152a57b3b25/volumes" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.184437 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cfa50b-4138-4585-a53e-64dd3ab73335" path="/var/lib/kubelet/pods/09cfa50b-4138-4585-a53e-64dd3ab73335/volumes" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.185780 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" path="/var/lib/kubelet/pods/0dd0fbac-8c0d-4228-8faa-abbeedabf7db/volumes" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.187031 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0effdbcf-dd7d-404d-9d48-77536d665a5d" path="/var/lib/kubelet/pods/0effdbcf-dd7d-404d-9d48-77536d665a5d/volumes" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.188792 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149b3c48-e17c-4a66-a835-d86dabf6ff13" path="/var/lib/kubelet/pods/149b3c48-e17c-4a66-a835-d86dabf6ff13/volumes" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.190185 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16bdd140-dce1-464c-ab47-dd5798d1d256" path="/var/lib/kubelet/pods/16bdd140-dce1-464c-ab47-dd5798d1d256/volumes" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.191320 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f80adb-c1c3-49ba-8ee4-932c851d3897" path="/var/lib/kubelet/pods/18f80adb-c1c3-49ba-8ee4-932c851d3897/volumes" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.192478 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" path="/var/lib/kubelet/pods/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e/volumes" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.193036 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2325ffef-9d5b-447f-b00e-3efc429acefe" path="/var/lib/kubelet/pods/2325ffef-9d5b-447f-b00e-3efc429acefe/volumes" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.194232 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301e1965-1754-483d-b6cc-bfae7038bbca" path="/var/lib/kubelet/pods/301e1965-1754-483d-b6cc-bfae7038bbca/volumes" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.194975 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31fa8943-81cc-4750-a0b7-0fa9ab5af883" path="/var/lib/kubelet/pods/31fa8943-81cc-4750-a0b7-0fa9ab5af883/volumes" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.196314 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a11a02-47e1-488f-b270-2679d3298b0e" path="/var/lib/kubelet/pods/42a11a02-47e1-488f-b270-2679d3298b0e/volumes" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.196909 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="567683bd-0efc-4f21-b076-e28559628404" path="/var/lib/kubelet/pods/567683bd-0efc-4f21-b076-e28559628404/volumes" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.198262 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584e1f4a-8205-47d7-8efb-3afc6017c4c9" path="/var/lib/kubelet/pods/584e1f4a-8205-47d7-8efb-3afc6017c4c9/volumes" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.198682 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593a3561-7760-45c5-8f91-5aaef7475d0f" path="/var/lib/kubelet/pods/593a3561-7760-45c5-8f91-5aaef7475d0f/volumes" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.199345 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5ebfebf6-3ecd-458e-943f-bb25b52e2718" path="/var/lib/kubelet/pods/5ebfebf6-3ecd-458e-943f-bb25b52e2718/volumes" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.200376 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6077b63e-53a2-4f96-9d56-1ce0324e4913" path="/var/lib/kubelet/pods/6077b63e-53a2-4f96-9d56-1ce0324e4913/volumes" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.201317 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" path="/var/lib/kubelet/pods/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca/volumes" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.202546 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6edfcf45-925b-4eff-b940-95b6fc0b85d4" path="/var/lib/kubelet/pods/6edfcf45-925b-4eff-b940-95b6fc0b85d4/volumes" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.203348 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee8fbd3-1f81-4666-96da-5afc70819f1a" path="/var/lib/kubelet/pods/6ee8fbd3-1f81-4666-96da-5afc70819f1a/volumes" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.204158 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" path="/var/lib/kubelet/pods/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a/volumes" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.206030 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736c54fe-349c-4bb9-870a-d1c1d1c03831" path="/var/lib/kubelet/pods/736c54fe-349c-4bb9-870a-d1c1d1c03831/volumes" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.206997 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7599e0b6-bddf-4def-b7f2-0b32206e8651" path="/var/lib/kubelet/pods/7599e0b6-bddf-4def-b7f2-0b32206e8651/volumes" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.207895 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="7afa918d-be67-40a6-803c-d3b0ae99d815" path="/var/lib/kubelet/pods/7afa918d-be67-40a6-803c-d3b0ae99d815/volumes" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.209003 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df94c10-441d-4386-93a6-6730fb7bcde0" path="/var/lib/kubelet/pods/7df94c10-441d-4386-93a6-6730fb7bcde0/volumes" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.209944 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" path="/var/lib/kubelet/pods/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a/volumes" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.211212 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81e39f7b-62e4-4fc9-992a-6535ce127a02" path="/var/lib/kubelet/pods/81e39f7b-62e4-4fc9-992a-6535ce127a02/volumes" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.211935 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869851b9-7ffb-4af0-b166-1d8aa40a5f80" path="/var/lib/kubelet/pods/869851b9-7ffb-4af0-b166-1d8aa40a5f80/volumes" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.213005 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"592342ad-cf5f-4290-aa15-e99a6454cbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kpj7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.213793 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" path="/var/lib/kubelet/pods/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff/volumes" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.214682 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92dfbade-90b6-4169-8c07-72cff7f2c82b" path="/var/lib/kubelet/pods/92dfbade-90b6-4169-8c07-72cff7f2c82b/volumes" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.216026 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a6e063-3d1a-4d44-875d-185291448c31" path="/var/lib/kubelet/pods/94a6e063-3d1a-4d44-875d-185291448c31/volumes" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.217161 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f71a554-e414-4bc3-96d2-674060397afe" path="/var/lib/kubelet/pods/9f71a554-e414-4bc3-96d2-674060397afe/volumes" 
Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.218332 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a208c9c2-333b-4b4a-be0d-bc32ec38a821" path="/var/lib/kubelet/pods/a208c9c2-333b-4b4a-be0d-bc32ec38a821/volumes" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.219542 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" path="/var/lib/kubelet/pods/a52afe44-fb37-46ed-a1f8-bf39727a3cbe/volumes" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.220288 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a555ff2e-0be6-46d5-897d-863bb92ae2b3" path="/var/lib/kubelet/pods/a555ff2e-0be6-46d5-897d-863bb92ae2b3/volumes" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.221128 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a88189-c967-4640-879e-27665747f20c" path="/var/lib/kubelet/pods/a7a88189-c967-4640-879e-27665747f20c/volumes" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.221768 5131 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.222023 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volumes" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.224725 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af41de71-79cf-4590-bbe9-9e8b848862cb" path="/var/lib/kubelet/pods/af41de71-79cf-4590-bbe9-9e8b848862cb/volumes" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.225655 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" 
path="/var/lib/kubelet/pods/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a/volumes" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.226796 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4750666-1362-4001-abd0-6f89964cc621" path="/var/lib/kubelet/pods/b4750666-1362-4001-abd0-6f89964cc621/volumes" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.272028 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b605f283-6f2e-42da-a838-54421690f7d0" path="/var/lib/kubelet/pods/b605f283-6f2e-42da-a838-54421690f7d0/volumes" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.273059 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c491984c-7d4b-44aa-8c1e-d7974424fa47" path="/var/lib/kubelet/pods/c491984c-7d4b-44aa-8c1e-d7974424fa47/volumes" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.274097 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf037731-32c8-4638-9ee7-13bdb0c68279\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://45412eb529aa08671402f5e439a2d0258d5e438466b13a1a3a8264e3eb9c8407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b059fcd2d184beda447aba3f6a320cb6d3f0c1bc3061fc47b9020d4c03f4a020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b059fcd2d184beda447aba3f6a320cb6d3f0c1bc3061fc47b9020d4c03f4a020\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:49:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.275584 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f2bfad-70f6-4185-a3d9-81ce12720767" 
path="/var/lib/kubelet/pods/c5f2bfad-70f6-4185-a3d9-81ce12720767/volumes" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.278532 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc85e424-18b2-4924-920b-bd291a8c4b01" path="/var/lib/kubelet/pods/cc85e424-18b2-4924-920b-bd291a8c4b01/volumes" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.280606 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce090a97-9ab6-4c40-a719-64ff2acd9778" path="/var/lib/kubelet/pods/ce090a97-9ab6-4c40-a719-64ff2acd9778/volumes" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.281664 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19cb085-0c5b-4810-b654-ce7923221d90" path="/var/lib/kubelet/pods/d19cb085-0c5b-4810-b654-ce7923221d90/volumes" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.281854 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.281886 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.281894 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.281907 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.281916 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:52Z","lastTransitionTime":"2026-01-07T09:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.283603 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" path="/var/lib/kubelet/pods/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7/volumes" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.285018 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d565531a-ff86-4608-9d19-767de01ac31b" path="/var/lib/kubelet/pods/d565531a-ff86-4608-9d19-767de01ac31b/volumes" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.286531 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e8f42f-dc0e-424b-bb56-5ec849834888" path="/var/lib/kubelet/pods/d7e8f42f-dc0e-424b-bb56-5ec849834888/volumes" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.287423 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" path="/var/lib/kubelet/pods/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9/volumes" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.288856 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e093be35-bb62-4843-b2e8-094545761610" path="/var/lib/kubelet/pods/e093be35-bb62-4843-b2e8-094545761610/volumes" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.289948 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" path="/var/lib/kubelet/pods/e1d2a42d-af1d-4054-9618-ab545e0ed8b7/volumes" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.291614 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f559dfa3-3917-43a2-97f6-61ddfda10e93" path="/var/lib/kubelet/pods/f559dfa3-3917-43a2-97f6-61ddfda10e93/volumes" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.293134 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65c0ac1-8bca-454d-a2e6-e35cb418beac" 
path="/var/lib/kubelet/pods/f65c0ac1-8bca-454d-a2e6-e35cb418beac/volumes" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.293202 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a8b62c-1e16-4bf4-8a1a-7e21eea28a36\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://9ab1390be253b0acce2b38b656b6ab5fb3b2b0b0df6b0bf4aa1c9a6706d5b377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://222177f33dbcd646941928b01ab9b05233038233497ca1767fba6f7706b3dc93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f94695038c6c0633c279f363909c7c60ac6e6487469757ddfa9a64766e9ad38a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://81989c7ac8
01be354f6f1e78382dbefc67b72ef6a85367ea48e04fc6ff4f128e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce5c5322e4dfa939241d2f3f807c9d150117431e391c5f986a200413b054a33\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-07T09:50:19Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW0107 09:50:18.874623 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0107 09:50:18.874740 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0107 09:50:18.875448 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1194057718/tls.crt::/tmp/serving-cert-1194057718/tls.key\\\\\\\"\\\\nI0107 09:50:19.352672 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0107 09:50:19.355791 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0107 09:50:19.355824 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0107 09:50:19.355916 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0107 09:50:19.355934 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0107 09:50:19.362427 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0107 09:50:19.362473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0107 09:50:19.362471 1 genericapiserver.go:546] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0107 09:50:19.362482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 09:50:19.362512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0107 09:50:19.362519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0107 09:50:19.362527 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0107 09:50:19.362533 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0107 09:50:19.364774 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T09:50:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:50:50Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f7200e5d1d13d232ad67de2ea89381542d858c266de6b68e33bfe97a520bfd66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://dd6371190af55f4fae494e73d316e7347f26bd60e0b02bc18c31ce1cf7f1bb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd6371190af55f4fae494e73d316e7347f26bd60e0b02bc18c31ce1cf7f1bb9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:49:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.294560 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" path="/var/lib/kubelet/pods/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4/volumes" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.295630 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e2c886-118e-43bb-bef1-c78134de392b" path="/var/lib/kubelet/pods/f7e2c886-118e-43bb-bef1-c78134de392b/volumes" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.297267 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" path="/var/lib/kubelet/pods/fc8db2c7-859d-47b3-a900-2bd0c0b2973b/volumes" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.318821 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.358701 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5cj94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdv7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdv7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5cj94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.384802 5131 kubelet_node_status.go:736] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.384895 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.384917 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.384944 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.384962 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:52Z","lastTransitionTime":"2026-01-07T09:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.399933 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3942e752-44ba-4678-8723-6cd778e60d73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g97xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g97xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dvdrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.438986 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-n4kr4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad935b69-bef7-46a2-a03a-367404c13329\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9czf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9czf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-n4kr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.479548 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-mb6rx" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e402924-308a-4d47-8bf8-24a147d5f8bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zr9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mb6rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.487400 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.487443 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.487452 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.487467 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.487479 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:52Z","lastTransitionTime":"2026-01-07T09:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no 
CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.534124 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84a14d49-a62a-496d-9134-f47c75840988\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://bcc1f440c98d635bb4817103fd1d9a17926b7a874f95ff484233a874c8eadeb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0
}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://8a7484877b854cc26fd09edc6fd5c32934c1dffbbe432bfe7aff19ab695ef69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://75bb0f73ec339c2c6734121cc7d17e1fc680fd5202133c971e39ab46778e5714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bbdfb2d1ed90a5108986f54b916f1abbd45a3bae0271525826521f154c84eb84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://6d3a5a8dfdfcd032674d1e587d9cbd4f65ba801617ba61300364dac7a76
6bcc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://86a293f775ea339fa870889624391ae039158ac4544d88b6f9c9d7c136e716a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86a293f775ea339fa870889624391ae03
9158ac4544d88b6f9c9d7c136e716a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://102d63810639c2cee7fa3e0fef9769b09374348e27bc61573718700039515aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://102d63810639c2cee7fa3e0fef9769b09374348e27bc61573718700039515aa7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://160fd415fae252c3639e426e9905fd01e6e8f42b4cbb66f8169427c602cc373f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://160fd415fae252c3639e426e9905fd01e6e8f42b4cbb66f8169427c602cc373f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T09:49:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T09:49:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:49:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.557538 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mrsjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b094e1e2-9ae5-4cf3-9cef-71c25224af2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgbqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mrsjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.591043 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.591142 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.591201 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.591223 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.591276 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:52Z","lastTransitionTime":"2026-01-07T09:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.597362 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-wcqw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf4gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wcqw9\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.650792 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"592342ad-cf5f-4290-aa15-e99a6454cbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kpj7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.676449 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf037731-32c8-4638-9ee7-13bdb0c68279\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://45412eb529aa08671402f5e439a2d0258d5e438466b13a1a3a8264e3eb9c8407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory
\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b059fcd2d184beda447aba3f6a320cb6d3f0c1bc3061fc47b9020d4c03f4a020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b059fcd2d184beda447aba3f6a320cb6d3f0c1bc3061fc47b9020d4c03f4a020\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:49:32Z\\\"}}\
" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.693469 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.693534 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.693553 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.693578 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.693596 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:52Z","lastTransitionTime":"2026-01-07T09:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.725088 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a8b62c-1e16-4bf4-8a1a-7e21eea28a36\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://9ab1390be253b0acce2b38b656b6ab5fb3b2b0b0df6b0bf4aa1c9a6706d5b377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://222177f33dbcd646941928b01ab9b05233038233497ca1767fba6f7706b3dc93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f94695038c6c0633c279f363909c7c60ac6e6487469757ddfa9a64766e9ad38a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://81989c7ac8
01be354f6f1e78382dbefc67b72ef6a85367ea48e04fc6ff4f128e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce5c5322e4dfa939241d2f3f807c9d150117431e391c5f986a200413b054a33\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-07T09:50:19Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW0107 09:50:18.874623 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0107 09:50:18.874740 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0107 09:50:18.875448 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1194057718/tls.crt::/tmp/serving-cert-1194057718/tls.key\\\\\\\"\\\\nI0107 09:50:19.352672 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0107 09:50:19.355791 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0107 09:50:19.355824 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0107 09:50:19.355916 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0107 09:50:19.355934 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0107 09:50:19.362427 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0107 09:50:19.362473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0107 09:50:19.362471 1 genericapiserver.go:546] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0107 09:50:19.362482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 09:50:19.362512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0107 09:50:19.362519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0107 09:50:19.362527 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0107 09:50:19.362533 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0107 09:50:19.364774 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T09:50:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:50:50Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f7200e5d1d13d232ad67de2ea89381542d858c266de6b68e33bfe97a520bfd66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://dd6371190af55f4fae494e73d316e7347f26bd60e0b02bc18c31ce1cf7f1bb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd6371190af55f4fae494e73d316e7347f26bd60e0b02bc18c31ce1cf7f1bb9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:49:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.757157 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.795645 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.795676 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.795688 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.795706 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.795714 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:52Z","lastTransitionTime":"2026-01-07T09:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.798629 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5cj94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdv7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdv7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5cj94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.839176 5131 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3942e752-44ba-4678-8723-6cd778e60d73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g97xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g97xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dvdrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.876527 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-n4kr4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad935b69-bef7-46a2-a03a-367404c13329\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9czf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9czf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-n4kr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.898044 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 
09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.898104 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.898122 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.898147 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.898165 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:52Z","lastTransitionTime":"2026-01-07T09:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.918187 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-mb6rx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e402924-308a-4d47-8bf8-24a147d5f8bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zr9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mb6rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:52 crc kubenswrapper[5131]: I0107 09:50:52.974174 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84a14d49-a62a-496d-9134-f47c75840988\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://bcc1f440c98d635bb4817103fd1d9a17926b7a874f95ff484233a874c8eadeb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://8a7484877b854cc26fd09edc6fd5c32934c1dffbbe432bfe7aff19ab695ef69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://75bb0f73ec339c2c6734121cc7d17e1fc680fd5202133c971e39ab46778e5714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bbdfb2d1ed90a5108986f54b916f1abbd45a3bae0271525826521f154c84eb84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://6d3a5a8dfdfcd032674d1e587d9cbd4f65ba801617ba61300364dac7a766bcc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://86a293f775ea339fa870889624391ae039158ac4544d88b6f9c9d7c136e716a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86a293f775ea339fa870889624391ae039158ac4544d88b6f9c9d7c136e716a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://102d63810639c2cee7fa3e0fef9769b09374348e27bc61573718700039515aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://102d63810639c2cee7fa3e0fef9769b09374348e27bc61573718700039515aa7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://160fd415fae252c3639e426e9905fd01e6e8f42b4cbb66f8169427c602cc373f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://160fd415fae252c3639e426e9905fd01e6e8f42b4cbb66f8169427c602cc373f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T09:49:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T09:49:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:49:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:53 crc kubenswrapper[5131]: I0107 09:50:53.000112 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:53 crc kubenswrapper[5131]: I0107 09:50:53.000176 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:53 crc kubenswrapper[5131]: I0107 09:50:53.000198 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:53 crc kubenswrapper[5131]: I0107 09:50:53.000222 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:53 crc kubenswrapper[5131]: I0107 09:50:53.000240 5131 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:53Z","lastTransitionTime":"2026-01-07T09:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 09:50:53 crc kubenswrapper[5131]: I0107 09:50:53.002464 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae48c9e0-ebbd-4c8e-9c54-f6b3ac967d34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://3383385e15ea13116da82fca0263faac293829a1d334c3ab9c3e887d3df064f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cl
uster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://81a19faef229379a9f11c9404f00a4cd033fe495e075b60878147f896005767f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a
048a29a003bbae3bffb916e657c9b18246309ec82bcd1cf410f76e266ba25cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8b7f67a6eae4396f64fdd42279b61c6411a1dd1ad3f4d92b483b4cf59ff1284c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:49:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:53 crc kubenswrapper[5131]: I0107 09:50:53.040508 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2562cbe-7a5f-44ee-ab23-4c3c8713b3c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c8a53522c69210792aee2dce5b7e8e34b2cf22c24393e063a59b465373ab5096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1e073b8c65df9f45d38018d244c88e515556561a3c3feb4a2cf3d270c77064b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ffbff8be21e181dfc3205fb877325fee8beefff7ba32e422a2619e1ab9772a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\"
:0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1043d76beefe7dc0844f533476401d9ca57619ede4a2fa4b59df7c24ef674024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1043d76beefe7dc0844f533476401d9ca57619ede4a2fa4b59df7c24ef674024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:49:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:53 crc kubenswrapper[5131]: I0107 09:50:53.082048 5131 
status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:53 crc kubenswrapper[5131]: I0107 09:50:53.102878 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:53 crc kubenswrapper[5131]: I0107 09:50:53.102961 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:53 crc kubenswrapper[5131]: I0107 09:50:53.102989 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:53 crc kubenswrapper[5131]: I0107 09:50:53.103023 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:53 crc kubenswrapper[5131]: I0107 09:50:53.103049 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:53Z","lastTransitionTime":"2026-01-07T09:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:53 crc kubenswrapper[5131]: I0107 09:50:53.120407 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:53 crc kubenswrapper[5131]: I0107 09:50:53.164140 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbjvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b188180-f777-4a12-845b-d19fd5853d85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy 
cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbjvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:53 crc kubenswrapper[5131]: I0107 09:50:53.201660 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:53 crc kubenswrapper[5131]: I0107 09:50:53.205498 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:53 crc kubenswrapper[5131]: I0107 09:50:53.205554 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:53 crc kubenswrapper[5131]: I0107 09:50:53.205575 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:53 crc kubenswrapper[5131]: I0107 09:50:53.205597 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:53 crc kubenswrapper[5131]: I0107 09:50:53.205615 5131 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:53Z","lastTransitionTime":"2026-01-07T09:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 09:50:53 crc kubenswrapper[5131]: I0107 09:50:53.241776 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:53 crc kubenswrapper[5131]: I0107 09:50:53.279148 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:53 crc kubenswrapper[5131]: I0107 09:50:53.308958 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:53 crc kubenswrapper[5131]: I0107 09:50:53.309023 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:53 crc kubenswrapper[5131]: I0107 09:50:53.309044 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:53 crc kubenswrapper[5131]: I0107 09:50:53.309069 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:53 crc kubenswrapper[5131]: I0107 09:50:53.309088 5131 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:53Z","lastTransitionTime":"2026-01-07T09:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 09:50:53 crc kubenswrapper[5131]: I0107 09:50:53.411459 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:53 crc kubenswrapper[5131]: I0107 09:50:53.411543 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:53 crc kubenswrapper[5131]: I0107 09:50:53.411565 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:53 crc kubenswrapper[5131]: I0107 09:50:53.411592 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:53 crc kubenswrapper[5131]: I0107 09:50:53.411609 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:53Z","lastTransitionTime":"2026-01-07T09:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:53 crc kubenswrapper[5131]: I0107 09:50:53.513995 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:53 crc kubenswrapper[5131]: I0107 09:50:53.514061 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:53 crc kubenswrapper[5131]: I0107 09:50:53.514079 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:53 crc kubenswrapper[5131]: I0107 09:50:53.514103 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:53 crc kubenswrapper[5131]: I0107 09:50:53.514121 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:53Z","lastTransitionTime":"2026-01-07T09:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:53 crc kubenswrapper[5131]: I0107 09:50:53.617011 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:53 crc kubenswrapper[5131]: I0107 09:50:53.617074 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:53 crc kubenswrapper[5131]: I0107 09:50:53.617094 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:53 crc kubenswrapper[5131]: I0107 09:50:53.617118 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:53 crc kubenswrapper[5131]: I0107 09:50:53.617136 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:53Z","lastTransitionTime":"2026-01-07T09:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:53 crc kubenswrapper[5131]: I0107 09:50:53.719725 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:53 crc kubenswrapper[5131]: I0107 09:50:53.719787 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:53 crc kubenswrapper[5131]: I0107 09:50:53.719805 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:53 crc kubenswrapper[5131]: I0107 09:50:53.719862 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:53 crc kubenswrapper[5131]: I0107 09:50:53.719882 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:53Z","lastTransitionTime":"2026-01-07T09:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:53 crc kubenswrapper[5131]: I0107 09:50:53.822292 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:53 crc kubenswrapper[5131]: I0107 09:50:53.822635 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:53 crc kubenswrapper[5131]: I0107 09:50:53.822770 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:53 crc kubenswrapper[5131]: I0107 09:50:53.822947 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:53 crc kubenswrapper[5131]: I0107 09:50:53.823304 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:53Z","lastTransitionTime":"2026-01-07T09:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:53 crc kubenswrapper[5131]: I0107 09:50:53.926247 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:53 crc kubenswrapper[5131]: I0107 09:50:53.926313 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:53 crc kubenswrapper[5131]: I0107 09:50:53.926332 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:53 crc kubenswrapper[5131]: I0107 09:50:53.926357 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:53 crc kubenswrapper[5131]: I0107 09:50:53.926376 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:53Z","lastTransitionTime":"2026-01-07T09:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:53 crc kubenswrapper[5131]: I0107 09:50:53.994063 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 07 09:50:53 crc kubenswrapper[5131]: I0107 09:50:53.994213 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 07 09:50:53 crc kubenswrapper[5131]: I0107 09:50:53.994267 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 07 09:50:53 crc kubenswrapper[5131]: I0107 09:50:53.994306 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 07 09:50:53 crc kubenswrapper[5131]: E0107 09:50:53.994311 5131 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Jan 07 09:50:53 crc kubenswrapper[5131]: E0107 09:50:53.994358 5131 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 07 09:50:53 crc kubenswrapper[5131]: E0107 09:50:53.994373 5131 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 07 09:50:53 crc kubenswrapper[5131]: E0107 09:50:53.994375 5131 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 07 09:50:53 crc kubenswrapper[5131]: E0107 09:50:53.994421 5131 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 07 09:50:53 crc kubenswrapper[5131]: E0107 09:50:53.994445 5131 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 07 09:50:53 crc kubenswrapper[5131]: E0107 09:50:53.994442 5131 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 07 09:50:53 crc kubenswrapper[5131]: E0107 09:50:53.994448 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 
nodeName:}" failed. No retries permitted until 2026-01-07 09:50:57.994425312 +0000 UTC m=+86.160726956 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 07 09:50:53 crc kubenswrapper[5131]: E0107 09:50:53.994607 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-07 09:50:57.994572805 +0000 UTC m=+86.160874409 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 07 09:50:53 crc kubenswrapper[5131]: E0107 09:50:53.994634 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-07 09:50:57.994618827 +0000 UTC m=+86.160920491 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 07 09:50:53 crc kubenswrapper[5131]: E0107 09:50:53.994454 5131 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 07 09:50:53 crc kubenswrapper[5131]: E0107 09:50:53.994736 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-07 09:50:57.994705469 +0000 UTC m=+86.161007073 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.029066 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.029154 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.029182 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.029218 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 
07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.029244 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:54Z","lastTransitionTime":"2026-01-07T09:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.095493 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 07 09:50:54 crc kubenswrapper[5131]: E0107 09:50:54.095584 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:50:58.095569676 +0000 UTC m=+86.261871230 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.095735 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e-metrics-certs\") pod \"network-metrics-daemon-5cj94\" (UID: \"ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e\") " pod="openshift-multus/network-metrics-daemon-5cj94" Jan 07 09:50:54 crc kubenswrapper[5131]: E0107 09:50:54.095859 5131 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 07 09:50:54 crc kubenswrapper[5131]: E0107 09:50:54.095894 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e-metrics-certs podName:ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e nodeName:}" failed. No retries permitted until 2026-01-07 09:50:58.095887794 +0000 UTC m=+86.262189358 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e-metrics-certs") pod "network-metrics-daemon-5cj94" (UID: "ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.132255 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.132310 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.132329 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.132355 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.132374 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:54Z","lastTransitionTime":"2026-01-07T09:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.180308 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 07 09:50:54 crc kubenswrapper[5131]: E0107 09:50:54.180492 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.181121 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 07 09:50:54 crc kubenswrapper[5131]: E0107 09:50:54.181236 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.181298 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.181552 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cj94" Jan 07 09:50:54 crc kubenswrapper[5131]: E0107 09:50:54.181536 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 07 09:50:54 crc kubenswrapper[5131]: E0107 09:50:54.181694 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cj94" podUID="ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.235276 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.235356 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.235378 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.235405 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.235429 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:54Z","lastTransitionTime":"2026-01-07T09:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.337795 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.337902 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.337922 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.337953 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.337972 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:54Z","lastTransitionTime":"2026-01-07T09:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.363556 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.363632 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.363647 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.363673 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.363688 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:54Z","lastTransitionTime":"2026-01-07T09:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:54 crc kubenswrapper[5131]: E0107 09:50:54.379821 5131 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400452Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861252Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:50:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:50:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:50:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:50:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd75f290-f432-4d83-b44b-78dd53c6e94f\\\",\\\"systemUUID\\\":\\\"8ea6fa36-73d5-4d37-aab0-72c44945d452\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.384665 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.385032 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.385212 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.385396 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.385554 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:54Z","lastTransitionTime":"2026-01-07T09:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:54 crc kubenswrapper[5131]: E0107 09:50:54.401682 5131 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400452Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861252Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:50:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:50:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:50:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:50:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd75f290-f432-4d83-b44b-78dd53c6e94f\\\",\\\"systemUUID\\\":\\\"8ea6fa36-73d5-4d37-aab0-72c44945d452\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.406233 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.406293 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.406313 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.406336 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.406355 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:54Z","lastTransitionTime":"2026-01-07T09:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:54 crc kubenswrapper[5131]: E0107 09:50:54.421769 5131 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400452Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861252Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:50:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:50:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:50:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:50:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd75f290-f432-4d83-b44b-78dd53c6e94f\\\",\\\"systemUUID\\\":\\\"8ea6fa36-73d5-4d37-aab0-72c44945d452\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.426764 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.426967 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.426988 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.427017 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.427040 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:54Z","lastTransitionTime":"2026-01-07T09:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:54 crc kubenswrapper[5131]: E0107 09:50:54.441982 5131 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400452Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861252Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:50:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:50:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:50:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:50:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd75f290-f432-4d83-b44b-78dd53c6e94f\\\",\\\"systemUUID\\\":\\\"8ea6fa36-73d5-4d37-aab0-72c44945d452\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.446811 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.446946 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.447004 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.447030 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.447053 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:54Z","lastTransitionTime":"2026-01-07T09:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:54 crc kubenswrapper[5131]: E0107 09:50:54.461653 5131 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400452Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861252Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:50:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:50:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:50:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:50:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd75f290-f432-4d83-b44b-78dd53c6e94f\\\",\\\"systemUUID\\\":\\\"8ea6fa36-73d5-4d37-aab0-72c44945d452\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:50:54 crc kubenswrapper[5131]: E0107 09:50:54.461978 5131 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.463371 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.463422 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.463439 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.463460 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.463482 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:54Z","lastTransitionTime":"2026-01-07T09:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.565170 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.565212 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.565223 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.565238 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.565252 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:54Z","lastTransitionTime":"2026-01-07T09:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.667338 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.667380 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.667393 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.667409 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.667423 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:54Z","lastTransitionTime":"2026-01-07T09:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.769714 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.769752 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.769760 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.769773 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.769783 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:54Z","lastTransitionTime":"2026-01-07T09:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.872507 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.872630 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.872658 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.872691 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.872715 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:54Z","lastTransitionTime":"2026-01-07T09:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.976441 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.976513 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.976556 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.976579 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:54 crc kubenswrapper[5131]: I0107 09:50:54.976591 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:54Z","lastTransitionTime":"2026-01-07T09:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:55 crc kubenswrapper[5131]: I0107 09:50:55.079671 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:55 crc kubenswrapper[5131]: I0107 09:50:55.079736 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:55 crc kubenswrapper[5131]: I0107 09:50:55.079756 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:55 crc kubenswrapper[5131]: I0107 09:50:55.079780 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:55 crc kubenswrapper[5131]: I0107 09:50:55.079801 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:55Z","lastTransitionTime":"2026-01-07T09:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:55 crc kubenswrapper[5131]: I0107 09:50:55.182973 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:55 crc kubenswrapper[5131]: I0107 09:50:55.183059 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:55 crc kubenswrapper[5131]: I0107 09:50:55.183077 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:55 crc kubenswrapper[5131]: I0107 09:50:55.183106 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:55 crc kubenswrapper[5131]: I0107 09:50:55.183129 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:55Z","lastTransitionTime":"2026-01-07T09:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:55 crc kubenswrapper[5131]: I0107 09:50:55.285322 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:55 crc kubenswrapper[5131]: I0107 09:50:55.285371 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:55 crc kubenswrapper[5131]: I0107 09:50:55.285389 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:55 crc kubenswrapper[5131]: I0107 09:50:55.285412 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:55 crc kubenswrapper[5131]: I0107 09:50:55.285429 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:55Z","lastTransitionTime":"2026-01-07T09:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:55 crc kubenswrapper[5131]: I0107 09:50:55.387121 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:55 crc kubenswrapper[5131]: I0107 09:50:55.387175 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:55 crc kubenswrapper[5131]: I0107 09:50:55.387194 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:55 crc kubenswrapper[5131]: I0107 09:50:55.387220 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:55 crc kubenswrapper[5131]: I0107 09:50:55.387239 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:55Z","lastTransitionTime":"2026-01-07T09:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:55 crc kubenswrapper[5131]: I0107 09:50:55.489947 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:55 crc kubenswrapper[5131]: I0107 09:50:55.490017 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:55 crc kubenswrapper[5131]: I0107 09:50:55.490036 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:55 crc kubenswrapper[5131]: I0107 09:50:55.490062 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:55 crc kubenswrapper[5131]: I0107 09:50:55.490083 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:55Z","lastTransitionTime":"2026-01-07T09:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:55 crc kubenswrapper[5131]: I0107 09:50:55.592461 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:55 crc kubenswrapper[5131]: I0107 09:50:55.592549 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:55 crc kubenswrapper[5131]: I0107 09:50:55.592573 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:55 crc kubenswrapper[5131]: I0107 09:50:55.592607 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:55 crc kubenswrapper[5131]: I0107 09:50:55.592630 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:55Z","lastTransitionTime":"2026-01-07T09:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:55 crc kubenswrapper[5131]: I0107 09:50:55.695440 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:55 crc kubenswrapper[5131]: I0107 09:50:55.695515 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:55 crc kubenswrapper[5131]: I0107 09:50:55.695539 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:55 crc kubenswrapper[5131]: I0107 09:50:55.695570 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:55 crc kubenswrapper[5131]: I0107 09:50:55.695594 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:55Z","lastTransitionTime":"2026-01-07T09:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:55 crc kubenswrapper[5131]: I0107 09:50:55.798369 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:55 crc kubenswrapper[5131]: I0107 09:50:55.798429 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:55 crc kubenswrapper[5131]: I0107 09:50:55.798452 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:55 crc kubenswrapper[5131]: I0107 09:50:55.798480 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:55 crc kubenswrapper[5131]: I0107 09:50:55.798502 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:55Z","lastTransitionTime":"2026-01-07T09:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:55 crc kubenswrapper[5131]: I0107 09:50:55.901694 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:55 crc kubenswrapper[5131]: I0107 09:50:55.901769 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:55 crc kubenswrapper[5131]: I0107 09:50:55.901788 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:55 crc kubenswrapper[5131]: I0107 09:50:55.901813 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:55 crc kubenswrapper[5131]: I0107 09:50:55.901863 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:55Z","lastTransitionTime":"2026-01-07T09:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:56 crc kubenswrapper[5131]: I0107 09:50:56.004727 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:56 crc kubenswrapper[5131]: I0107 09:50:56.004807 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:56 crc kubenswrapper[5131]: I0107 09:50:56.004860 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:56 crc kubenswrapper[5131]: I0107 09:50:56.004893 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:56 crc kubenswrapper[5131]: I0107 09:50:56.004919 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:56Z","lastTransitionTime":"2026-01-07T09:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:56 crc kubenswrapper[5131]: I0107 09:50:56.107481 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:56 crc kubenswrapper[5131]: I0107 09:50:56.107567 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:56 crc kubenswrapper[5131]: I0107 09:50:56.107586 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:56 crc kubenswrapper[5131]: I0107 09:50:56.107613 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:56 crc kubenswrapper[5131]: I0107 09:50:56.107632 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:56Z","lastTransitionTime":"2026-01-07T09:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 09:50:56 crc kubenswrapper[5131]: I0107 09:50:56.180313 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 07 09:50:56 crc kubenswrapper[5131]: I0107 09:50:56.180348 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 07 09:50:56 crc kubenswrapper[5131]: E0107 09:50:56.180556 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 07 09:50:56 crc kubenswrapper[5131]: I0107 09:50:56.180356 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 07 09:50:56 crc kubenswrapper[5131]: E0107 09:50:56.180658 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 07 09:50:56 crc kubenswrapper[5131]: E0107 09:50:56.180935 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 07 09:50:56 crc kubenswrapper[5131]: I0107 09:50:56.180974 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cj94" Jan 07 09:50:56 crc kubenswrapper[5131]: E0107 09:50:56.181132 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5cj94" podUID="ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e" Jan 07 09:50:56 crc kubenswrapper[5131]: I0107 09:50:56.210571 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:56 crc kubenswrapper[5131]: I0107 09:50:56.210634 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:56 crc kubenswrapper[5131]: I0107 09:50:56.210652 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:56 crc kubenswrapper[5131]: I0107 09:50:56.210676 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:56 crc kubenswrapper[5131]: I0107 09:50:56.210696 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:56Z","lastTransitionTime":"2026-01-07T09:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:56 crc kubenswrapper[5131]: I0107 09:50:56.313519 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:56 crc kubenswrapper[5131]: I0107 09:50:56.313573 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:56 crc kubenswrapper[5131]: I0107 09:50:56.313586 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:56 crc kubenswrapper[5131]: I0107 09:50:56.313602 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:56 crc kubenswrapper[5131]: I0107 09:50:56.313614 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:56Z","lastTransitionTime":"2026-01-07T09:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:56 crc kubenswrapper[5131]: I0107 09:50:56.416470 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:56 crc kubenswrapper[5131]: I0107 09:50:56.416514 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:56 crc kubenswrapper[5131]: I0107 09:50:56.416524 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:56 crc kubenswrapper[5131]: I0107 09:50:56.416539 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:56 crc kubenswrapper[5131]: I0107 09:50:56.416550 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:56Z","lastTransitionTime":"2026-01-07T09:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:56 crc kubenswrapper[5131]: I0107 09:50:56.488881 5131 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Jan 07 09:50:56 crc kubenswrapper[5131]: I0107 09:50:56.518552 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:56 crc kubenswrapper[5131]: I0107 09:50:56.518603 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:56 crc kubenswrapper[5131]: I0107 09:50:56.518616 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:56 crc kubenswrapper[5131]: I0107 09:50:56.518636 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:56 crc kubenswrapper[5131]: I0107 09:50:56.518648 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:56Z","lastTransitionTime":"2026-01-07T09:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:56 crc kubenswrapper[5131]: I0107 09:50:56.621922 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:56 crc kubenswrapper[5131]: I0107 09:50:56.622006 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:56 crc kubenswrapper[5131]: I0107 09:50:56.622024 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:56 crc kubenswrapper[5131]: I0107 09:50:56.622049 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:56 crc kubenswrapper[5131]: I0107 09:50:56.622068 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:56Z","lastTransitionTime":"2026-01-07T09:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:56 crc kubenswrapper[5131]: I0107 09:50:56.725286 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:56 crc kubenswrapper[5131]: I0107 09:50:56.725383 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:56 crc kubenswrapper[5131]: I0107 09:50:56.725408 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:56 crc kubenswrapper[5131]: I0107 09:50:56.725439 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:56 crc kubenswrapper[5131]: I0107 09:50:56.725460 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:56Z","lastTransitionTime":"2026-01-07T09:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:56 crc kubenswrapper[5131]: I0107 09:50:56.828415 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:56 crc kubenswrapper[5131]: I0107 09:50:56.828515 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:56 crc kubenswrapper[5131]: I0107 09:50:56.828542 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:56 crc kubenswrapper[5131]: I0107 09:50:56.828577 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:56 crc kubenswrapper[5131]: I0107 09:50:56.828601 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:56Z","lastTransitionTime":"2026-01-07T09:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:56 crc kubenswrapper[5131]: I0107 09:50:56.931285 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:56 crc kubenswrapper[5131]: I0107 09:50:56.931344 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:56 crc kubenswrapper[5131]: I0107 09:50:56.931361 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:56 crc kubenswrapper[5131]: I0107 09:50:56.931385 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:56 crc kubenswrapper[5131]: I0107 09:50:56.931405 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:56Z","lastTransitionTime":"2026-01-07T09:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:57 crc kubenswrapper[5131]: I0107 09:50:57.034021 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:57 crc kubenswrapper[5131]: I0107 09:50:57.034432 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:57 crc kubenswrapper[5131]: I0107 09:50:57.034453 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:57 crc kubenswrapper[5131]: I0107 09:50:57.034479 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:57 crc kubenswrapper[5131]: I0107 09:50:57.034497 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:57Z","lastTransitionTime":"2026-01-07T09:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:57 crc kubenswrapper[5131]: I0107 09:50:57.136594 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:57 crc kubenswrapper[5131]: I0107 09:50:57.136667 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:57 crc kubenswrapper[5131]: I0107 09:50:57.136687 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:57 crc kubenswrapper[5131]: I0107 09:50:57.136713 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:57 crc kubenswrapper[5131]: I0107 09:50:57.136733 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:57Z","lastTransitionTime":"2026-01-07T09:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:57 crc kubenswrapper[5131]: I0107 09:50:57.239440 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:57 crc kubenswrapper[5131]: I0107 09:50:57.239546 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:57 crc kubenswrapper[5131]: I0107 09:50:57.239576 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:57 crc kubenswrapper[5131]: I0107 09:50:57.239608 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:57 crc kubenswrapper[5131]: I0107 09:50:57.239641 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:57Z","lastTransitionTime":"2026-01-07T09:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:57 crc kubenswrapper[5131]: I0107 09:50:57.342525 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:57 crc kubenswrapper[5131]: I0107 09:50:57.342599 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:57 crc kubenswrapper[5131]: I0107 09:50:57.342623 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:57 crc kubenswrapper[5131]: I0107 09:50:57.342656 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:57 crc kubenswrapper[5131]: I0107 09:50:57.342680 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:57Z","lastTransitionTime":"2026-01-07T09:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:57 crc kubenswrapper[5131]: I0107 09:50:57.445039 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:57 crc kubenswrapper[5131]: I0107 09:50:57.445113 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:57 crc kubenswrapper[5131]: I0107 09:50:57.445133 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:57 crc kubenswrapper[5131]: I0107 09:50:57.445161 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:57 crc kubenswrapper[5131]: I0107 09:50:57.445179 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:57Z","lastTransitionTime":"2026-01-07T09:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:57 crc kubenswrapper[5131]: I0107 09:50:57.547707 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:57 crc kubenswrapper[5131]: I0107 09:50:57.547774 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:57 crc kubenswrapper[5131]: I0107 09:50:57.547792 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:57 crc kubenswrapper[5131]: I0107 09:50:57.547819 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:57 crc kubenswrapper[5131]: I0107 09:50:57.547868 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:57Z","lastTransitionTime":"2026-01-07T09:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:57 crc kubenswrapper[5131]: I0107 09:50:57.650152 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:57 crc kubenswrapper[5131]: I0107 09:50:57.650214 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:57 crc kubenswrapper[5131]: I0107 09:50:57.650225 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:57 crc kubenswrapper[5131]: I0107 09:50:57.650243 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:57 crc kubenswrapper[5131]: I0107 09:50:57.650256 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:57Z","lastTransitionTime":"2026-01-07T09:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:57 crc kubenswrapper[5131]: I0107 09:50:57.752862 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:57 crc kubenswrapper[5131]: I0107 09:50:57.752944 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:57 crc kubenswrapper[5131]: I0107 09:50:57.752971 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:57 crc kubenswrapper[5131]: I0107 09:50:57.752998 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:57 crc kubenswrapper[5131]: I0107 09:50:57.753017 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:57Z","lastTransitionTime":"2026-01-07T09:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:57 crc kubenswrapper[5131]: I0107 09:50:57.855897 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:57 crc kubenswrapper[5131]: I0107 09:50:57.856007 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:57 crc kubenswrapper[5131]: I0107 09:50:57.856026 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:57 crc kubenswrapper[5131]: I0107 09:50:57.856093 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:57 crc kubenswrapper[5131]: I0107 09:50:57.856116 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:57Z","lastTransitionTime":"2026-01-07T09:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:57 crc kubenswrapper[5131]: I0107 09:50:57.958797 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:57 crc kubenswrapper[5131]: I0107 09:50:57.958921 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:57 crc kubenswrapper[5131]: I0107 09:50:57.958943 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:57 crc kubenswrapper[5131]: I0107 09:50:57.958971 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:57 crc kubenswrapper[5131]: I0107 09:50:57.958990 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:57Z","lastTransitionTime":"2026-01-07T09:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:58 crc kubenswrapper[5131]: I0107 09:50:58.044240 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 07 09:50:58 crc kubenswrapper[5131]: I0107 09:50:58.044331 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 07 09:50:58 crc kubenswrapper[5131]: I0107 09:50:58.044386 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 07 09:50:58 crc kubenswrapper[5131]: I0107 09:50:58.044421 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 07 09:50:58 crc kubenswrapper[5131]: E0107 09:50:58.044488 5131 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Jan 07 09:50:58 crc kubenswrapper[5131]: E0107 09:50:58.044533 5131 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 07 09:50:58 crc kubenswrapper[5131]: E0107 09:50:58.044562 5131 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 07 09:50:58 crc kubenswrapper[5131]: E0107 09:50:58.044649 5131 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 07 09:50:58 crc kubenswrapper[5131]: E0107 09:50:58.044706 5131 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 07 09:50:58 crc kubenswrapper[5131]: E0107 09:50:58.044661 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-07 09:51:06.044636854 +0000 UTC m=+94.210938448 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 07 09:50:58 crc kubenswrapper[5131]: E0107 09:50:58.044883 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-07 09:51:06.044795968 +0000 UTC m=+94.211097572 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 07 09:50:58 crc kubenswrapper[5131]: E0107 09:50:58.044925 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-07 09:51:06.044909001 +0000 UTC m=+94.211210615 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 07 09:50:58 crc kubenswrapper[5131]: E0107 09:50:58.045091 5131 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 07 09:50:58 crc kubenswrapper[5131]: E0107 09:50:58.045113 5131 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 07 09:50:58 crc kubenswrapper[5131]: E0107 09:50:58.045133 5131 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 07 09:50:58 crc kubenswrapper[5131]: E0107 09:50:58.045192 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-07 09:51:06.045177638 +0000 UTC m=+94.211479242 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 07 09:50:58 crc kubenswrapper[5131]: I0107 09:50:58.061967 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:58 crc kubenswrapper[5131]: I0107 09:50:58.062024 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:58 crc kubenswrapper[5131]: I0107 09:50:58.062041 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:58 crc kubenswrapper[5131]: I0107 09:50:58.062064 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:58 crc kubenswrapper[5131]: I0107 09:50:58.062082 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:58Z","lastTransitionTime":"2026-01-07T09:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:58 crc kubenswrapper[5131]: I0107 09:50:58.145979 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 07 09:50:58 crc kubenswrapper[5131]: I0107 09:50:58.146148 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e-metrics-certs\") pod \"network-metrics-daemon-5cj94\" (UID: \"ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e\") " pod="openshift-multus/network-metrics-daemon-5cj94" Jan 07 09:50:58 crc kubenswrapper[5131]: E0107 09:50:58.146277 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:06.146237 +0000 UTC m=+94.312538604 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:50:58 crc kubenswrapper[5131]: E0107 09:50:58.146313 5131 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 07 09:50:58 crc kubenswrapper[5131]: E0107 09:50:58.146406 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e-metrics-certs podName:ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e nodeName:}" failed. No retries permitted until 2026-01-07 09:51:06.146378313 +0000 UTC m=+94.312679907 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e-metrics-certs") pod "network-metrics-daemon-5cj94" (UID: "ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 07 09:50:58 crc kubenswrapper[5131]: I0107 09:50:58.165229 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:58 crc kubenswrapper[5131]: I0107 09:50:58.165286 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:58 crc kubenswrapper[5131]: I0107 09:50:58.165306 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:58 crc kubenswrapper[5131]: I0107 09:50:58.165330 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:58 crc kubenswrapper[5131]: I0107 09:50:58.165347 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:58Z","lastTransitionTime":"2026-01-07T09:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 09:50:58 crc kubenswrapper[5131]: I0107 09:50:58.180000 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 07 09:50:58 crc kubenswrapper[5131]: E0107 09:50:58.180173 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 07 09:50:58 crc kubenswrapper[5131]: I0107 09:50:58.180197 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 07 09:50:58 crc kubenswrapper[5131]: I0107 09:50:58.180270 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cj94" Jan 07 09:50:58 crc kubenswrapper[5131]: E0107 09:50:58.180414 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 07 09:50:58 crc kubenswrapper[5131]: I0107 09:50:58.180609 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 07 09:50:58 crc kubenswrapper[5131]: E0107 09:50:58.180602 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cj94" podUID="ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e" Jan 07 09:50:58 crc kubenswrapper[5131]: E0107 09:50:58.180764 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 07 09:50:58 crc kubenswrapper[5131]: I0107 09:50:58.268204 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:58 crc kubenswrapper[5131]: I0107 09:50:58.268301 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:58 crc kubenswrapper[5131]: I0107 09:50:58.268326 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:58 crc kubenswrapper[5131]: I0107 09:50:58.268352 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:58 crc kubenswrapper[5131]: I0107 09:50:58.268372 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:58Z","lastTransitionTime":"2026-01-07T09:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:58 crc kubenswrapper[5131]: I0107 09:50:58.371616 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:58 crc kubenswrapper[5131]: I0107 09:50:58.371760 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:58 crc kubenswrapper[5131]: I0107 09:50:58.371786 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:58 crc kubenswrapper[5131]: I0107 09:50:58.371823 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:58 crc kubenswrapper[5131]: I0107 09:50:58.371880 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:58Z","lastTransitionTime":"2026-01-07T09:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:58 crc kubenswrapper[5131]: I0107 09:50:58.474747 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:58 crc kubenswrapper[5131]: I0107 09:50:58.474889 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:58 crc kubenswrapper[5131]: I0107 09:50:58.474918 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:58 crc kubenswrapper[5131]: I0107 09:50:58.474951 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:58 crc kubenswrapper[5131]: I0107 09:50:58.474974 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:58Z","lastTransitionTime":"2026-01-07T09:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:58 crc kubenswrapper[5131]: I0107 09:50:58.576958 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:58 crc kubenswrapper[5131]: I0107 09:50:58.577041 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:58 crc kubenswrapper[5131]: I0107 09:50:58.577054 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:58 crc kubenswrapper[5131]: I0107 09:50:58.577073 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:58 crc kubenswrapper[5131]: I0107 09:50:58.577086 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:58Z","lastTransitionTime":"2026-01-07T09:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:58 crc kubenswrapper[5131]: I0107 09:50:58.679449 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:58 crc kubenswrapper[5131]: I0107 09:50:58.679486 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:58 crc kubenswrapper[5131]: I0107 09:50:58.679494 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:58 crc kubenswrapper[5131]: I0107 09:50:58.679508 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:58 crc kubenswrapper[5131]: I0107 09:50:58.679518 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:58Z","lastTransitionTime":"2026-01-07T09:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:58 crc kubenswrapper[5131]: I0107 09:50:58.781647 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:58 crc kubenswrapper[5131]: I0107 09:50:58.781741 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:58 crc kubenswrapper[5131]: I0107 09:50:58.781763 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:58 crc kubenswrapper[5131]: I0107 09:50:58.781788 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:58 crc kubenswrapper[5131]: I0107 09:50:58.781814 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:58Z","lastTransitionTime":"2026-01-07T09:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:58 crc kubenswrapper[5131]: I0107 09:50:58.884282 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:58 crc kubenswrapper[5131]: I0107 09:50:58.884325 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:58 crc kubenswrapper[5131]: I0107 09:50:58.884366 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:58 crc kubenswrapper[5131]: I0107 09:50:58.884379 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:58 crc kubenswrapper[5131]: I0107 09:50:58.884388 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:58Z","lastTransitionTime":"2026-01-07T09:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:58 crc kubenswrapper[5131]: I0107 09:50:58.987484 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:58 crc kubenswrapper[5131]: I0107 09:50:58.987532 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:58 crc kubenswrapper[5131]: I0107 09:50:58.987545 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:58 crc kubenswrapper[5131]: I0107 09:50:58.987562 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:58 crc kubenswrapper[5131]: I0107 09:50:58.987575 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:58Z","lastTransitionTime":"2026-01-07T09:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:59 crc kubenswrapper[5131]: I0107 09:50:59.090414 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:59 crc kubenswrapper[5131]: I0107 09:50:59.090486 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:59 crc kubenswrapper[5131]: I0107 09:50:59.090504 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:59 crc kubenswrapper[5131]: I0107 09:50:59.090529 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:59 crc kubenswrapper[5131]: I0107 09:50:59.090546 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:59Z","lastTransitionTime":"2026-01-07T09:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:59 crc kubenswrapper[5131]: I0107 09:50:59.192996 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:59 crc kubenswrapper[5131]: I0107 09:50:59.193066 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:59 crc kubenswrapper[5131]: I0107 09:50:59.193085 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:59 crc kubenswrapper[5131]: I0107 09:50:59.193110 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:59 crc kubenswrapper[5131]: I0107 09:50:59.193128 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:59Z","lastTransitionTime":"2026-01-07T09:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:59 crc kubenswrapper[5131]: I0107 09:50:59.295894 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:59 crc kubenswrapper[5131]: I0107 09:50:59.295979 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:59 crc kubenswrapper[5131]: I0107 09:50:59.296003 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:59 crc kubenswrapper[5131]: I0107 09:50:59.296040 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:59 crc kubenswrapper[5131]: I0107 09:50:59.296063 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:59Z","lastTransitionTime":"2026-01-07T09:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:59 crc kubenswrapper[5131]: I0107 09:50:59.398372 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:59 crc kubenswrapper[5131]: I0107 09:50:59.398452 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:59 crc kubenswrapper[5131]: I0107 09:50:59.398463 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:59 crc kubenswrapper[5131]: I0107 09:50:59.398479 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:59 crc kubenswrapper[5131]: I0107 09:50:59.398487 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:59Z","lastTransitionTime":"2026-01-07T09:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:59 crc kubenswrapper[5131]: I0107 09:50:59.500417 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:59 crc kubenswrapper[5131]: I0107 09:50:59.500475 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:59 crc kubenswrapper[5131]: I0107 09:50:59.500492 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:59 crc kubenswrapper[5131]: I0107 09:50:59.500513 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:59 crc kubenswrapper[5131]: I0107 09:50:59.500528 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:59Z","lastTransitionTime":"2026-01-07T09:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:59 crc kubenswrapper[5131]: I0107 09:50:59.602419 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:59 crc kubenswrapper[5131]: I0107 09:50:59.602480 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:59 crc kubenswrapper[5131]: I0107 09:50:59.602498 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:59 crc kubenswrapper[5131]: I0107 09:50:59.602521 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:59 crc kubenswrapper[5131]: I0107 09:50:59.602538 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:59Z","lastTransitionTime":"2026-01-07T09:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:59 crc kubenswrapper[5131]: I0107 09:50:59.704347 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:59 crc kubenswrapper[5131]: I0107 09:50:59.704385 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:59 crc kubenswrapper[5131]: I0107 09:50:59.704397 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:59 crc kubenswrapper[5131]: I0107 09:50:59.704410 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:59 crc kubenswrapper[5131]: I0107 09:50:59.704419 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:59Z","lastTransitionTime":"2026-01-07T09:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:59 crc kubenswrapper[5131]: I0107 09:50:59.806791 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:59 crc kubenswrapper[5131]: I0107 09:50:59.806885 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:59 crc kubenswrapper[5131]: I0107 09:50:59.806913 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:59 crc kubenswrapper[5131]: I0107 09:50:59.806937 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:59 crc kubenswrapper[5131]: I0107 09:50:59.806954 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:59Z","lastTransitionTime":"2026-01-07T09:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:50:59 crc kubenswrapper[5131]: I0107 09:50:59.909658 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:50:59 crc kubenswrapper[5131]: I0107 09:50:59.909701 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:50:59 crc kubenswrapper[5131]: I0107 09:50:59.909728 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:50:59 crc kubenswrapper[5131]: I0107 09:50:59.909740 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:50:59 crc kubenswrapper[5131]: I0107 09:50:59.909752 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:50:59Z","lastTransitionTime":"2026-01-07T09:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:00 crc kubenswrapper[5131]: I0107 09:51:00.013112 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:00 crc kubenswrapper[5131]: I0107 09:51:00.013181 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:00 crc kubenswrapper[5131]: I0107 09:51:00.013199 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:00 crc kubenswrapper[5131]: I0107 09:51:00.013220 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:00 crc kubenswrapper[5131]: I0107 09:51:00.013233 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:00Z","lastTransitionTime":"2026-01-07T09:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:00 crc kubenswrapper[5131]: I0107 09:51:00.116829 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:00 crc kubenswrapper[5131]: I0107 09:51:00.116912 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:00 crc kubenswrapper[5131]: I0107 09:51:00.116933 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:00 crc kubenswrapper[5131]: I0107 09:51:00.116957 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:00 crc kubenswrapper[5131]: I0107 09:51:00.116975 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:00Z","lastTransitionTime":"2026-01-07T09:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 09:51:00 crc kubenswrapper[5131]: I0107 09:51:00.179259 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 07 09:51:00 crc kubenswrapper[5131]: E0107 09:51:00.179461 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 07 09:51:00 crc kubenswrapper[5131]: I0107 09:51:00.179287 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 07 09:51:00 crc kubenswrapper[5131]: I0107 09:51:00.179515 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 07 09:51:00 crc kubenswrapper[5131]: E0107 09:51:00.179655 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 07 09:51:00 crc kubenswrapper[5131]: I0107 09:51:00.179722 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cj94" Jan 07 09:51:00 crc kubenswrapper[5131]: E0107 09:51:00.179824 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cj94" podUID="ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e" Jan 07 09:51:00 crc kubenswrapper[5131]: E0107 09:51:00.179940 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 07 09:51:00 crc kubenswrapper[5131]: I0107 09:51:00.219327 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:00 crc kubenswrapper[5131]: I0107 09:51:00.219390 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:00 crc kubenswrapper[5131]: I0107 09:51:00.219407 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:00 crc kubenswrapper[5131]: I0107 09:51:00.219432 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:00 crc kubenswrapper[5131]: I0107 09:51:00.219451 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:00Z","lastTransitionTime":"2026-01-07T09:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:00 crc kubenswrapper[5131]: I0107 09:51:00.321937 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:00 crc kubenswrapper[5131]: I0107 09:51:00.322062 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:00 crc kubenswrapper[5131]: I0107 09:51:00.322085 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:00 crc kubenswrapper[5131]: I0107 09:51:00.322121 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:00 crc kubenswrapper[5131]: I0107 09:51:00.322142 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:00Z","lastTransitionTime":"2026-01-07T09:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:00 crc kubenswrapper[5131]: I0107 09:51:00.424617 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:00 crc kubenswrapper[5131]: I0107 09:51:00.424676 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:00 crc kubenswrapper[5131]: I0107 09:51:00.424699 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:00 crc kubenswrapper[5131]: I0107 09:51:00.424724 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:00 crc kubenswrapper[5131]: I0107 09:51:00.424742 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:00Z","lastTransitionTime":"2026-01-07T09:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:00 crc kubenswrapper[5131]: I0107 09:51:00.527181 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:00 crc kubenswrapper[5131]: I0107 09:51:00.527250 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:00 crc kubenswrapper[5131]: I0107 09:51:00.527269 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:00 crc kubenswrapper[5131]: I0107 09:51:00.527293 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:00 crc kubenswrapper[5131]: I0107 09:51:00.527311 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:00Z","lastTransitionTime":"2026-01-07T09:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:00 crc kubenswrapper[5131]: I0107 09:51:00.630149 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:00 crc kubenswrapper[5131]: I0107 09:51:00.630216 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:00 crc kubenswrapper[5131]: I0107 09:51:00.630243 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:00 crc kubenswrapper[5131]: I0107 09:51:00.630275 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:00 crc kubenswrapper[5131]: I0107 09:51:00.630297 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:00Z","lastTransitionTime":"2026-01-07T09:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:00 crc kubenswrapper[5131]: I0107 09:51:00.732296 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:00 crc kubenswrapper[5131]: I0107 09:51:00.732375 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:00 crc kubenswrapper[5131]: I0107 09:51:00.732398 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:00 crc kubenswrapper[5131]: I0107 09:51:00.732422 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:00 crc kubenswrapper[5131]: I0107 09:51:00.732440 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:00Z","lastTransitionTime":"2026-01-07T09:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:00 crc kubenswrapper[5131]: I0107 09:51:00.835680 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:00 crc kubenswrapper[5131]: I0107 09:51:00.835763 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:00 crc kubenswrapper[5131]: I0107 09:51:00.835788 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:00 crc kubenswrapper[5131]: I0107 09:51:00.835818 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:00 crc kubenswrapper[5131]: I0107 09:51:00.835872 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:00Z","lastTransitionTime":"2026-01-07T09:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:00 crc kubenswrapper[5131]: I0107 09:51:00.939709 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:00 crc kubenswrapper[5131]: I0107 09:51:00.939801 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:00 crc kubenswrapper[5131]: I0107 09:51:00.939864 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:00 crc kubenswrapper[5131]: I0107 09:51:00.939902 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:00 crc kubenswrapper[5131]: I0107 09:51:00.939926 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:00Z","lastTransitionTime":"2026-01-07T09:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:01 crc kubenswrapper[5131]: I0107 09:51:01.042385 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:01 crc kubenswrapper[5131]: I0107 09:51:01.042437 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:01 crc kubenswrapper[5131]: I0107 09:51:01.042449 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:01 crc kubenswrapper[5131]: I0107 09:51:01.042465 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:01 crc kubenswrapper[5131]: I0107 09:51:01.042476 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:01Z","lastTransitionTime":"2026-01-07T09:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:01 crc kubenswrapper[5131]: I0107 09:51:01.144967 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:01 crc kubenswrapper[5131]: I0107 09:51:01.145050 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:01 crc kubenswrapper[5131]: I0107 09:51:01.145099 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:01 crc kubenswrapper[5131]: I0107 09:51:01.145151 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:01 crc kubenswrapper[5131]: I0107 09:51:01.145179 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:01Z","lastTransitionTime":"2026-01-07T09:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:01 crc kubenswrapper[5131]: I0107 09:51:01.247913 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:01 crc kubenswrapper[5131]: I0107 09:51:01.247998 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:01 crc kubenswrapper[5131]: I0107 09:51:01.248024 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:01 crc kubenswrapper[5131]: I0107 09:51:01.248058 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:01 crc kubenswrapper[5131]: I0107 09:51:01.248080 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:01Z","lastTransitionTime":"2026-01-07T09:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:01 crc kubenswrapper[5131]: I0107 09:51:01.351277 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:01 crc kubenswrapper[5131]: I0107 09:51:01.351354 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:01 crc kubenswrapper[5131]: I0107 09:51:01.351373 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:01 crc kubenswrapper[5131]: I0107 09:51:01.351397 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:01 crc kubenswrapper[5131]: I0107 09:51:01.351415 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:01Z","lastTransitionTime":"2026-01-07T09:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:01 crc kubenswrapper[5131]: I0107 09:51:01.453565 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:01 crc kubenswrapper[5131]: I0107 09:51:01.453630 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:01 crc kubenswrapper[5131]: I0107 09:51:01.453649 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:01 crc kubenswrapper[5131]: I0107 09:51:01.453673 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:01 crc kubenswrapper[5131]: I0107 09:51:01.453689 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:01Z","lastTransitionTime":"2026-01-07T09:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:01 crc kubenswrapper[5131]: I0107 09:51:01.555757 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:01 crc kubenswrapper[5131]: I0107 09:51:01.555822 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:01 crc kubenswrapper[5131]: I0107 09:51:01.555879 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:01 crc kubenswrapper[5131]: I0107 09:51:01.555904 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:01 crc kubenswrapper[5131]: I0107 09:51:01.555925 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:01Z","lastTransitionTime":"2026-01-07T09:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:01 crc kubenswrapper[5131]: I0107 09:51:01.658390 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:01 crc kubenswrapper[5131]: I0107 09:51:01.658449 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:01 crc kubenswrapper[5131]: I0107 09:51:01.658463 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:01 crc kubenswrapper[5131]: I0107 09:51:01.658481 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:01 crc kubenswrapper[5131]: I0107 09:51:01.658495 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:01Z","lastTransitionTime":"2026-01-07T09:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:01 crc kubenswrapper[5131]: I0107 09:51:01.761492 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:01 crc kubenswrapper[5131]: I0107 09:51:01.761579 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:01 crc kubenswrapper[5131]: I0107 09:51:01.761603 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:01 crc kubenswrapper[5131]: I0107 09:51:01.761632 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:01 crc kubenswrapper[5131]: I0107 09:51:01.761652 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:01Z","lastTransitionTime":"2026-01-07T09:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:01 crc kubenswrapper[5131]: I0107 09:51:01.864424 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:01 crc kubenswrapper[5131]: I0107 09:51:01.864487 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:01 crc kubenswrapper[5131]: I0107 09:51:01.864506 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:01 crc kubenswrapper[5131]: I0107 09:51:01.864535 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:01 crc kubenswrapper[5131]: I0107 09:51:01.864556 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:01Z","lastTransitionTime":"2026-01-07T09:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:01 crc kubenswrapper[5131]: I0107 09:51:01.966982 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:01 crc kubenswrapper[5131]: I0107 09:51:01.967053 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:01 crc kubenswrapper[5131]: I0107 09:51:01.967073 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:01 crc kubenswrapper[5131]: I0107 09:51:01.967098 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:01 crc kubenswrapper[5131]: I0107 09:51:01.967116 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:01Z","lastTransitionTime":"2026-01-07T09:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.069661 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.069741 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.069759 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.069783 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.069800 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:02Z","lastTransitionTime":"2026-01-07T09:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.172653 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.172737 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.172763 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.172793 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.172815 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:02Z","lastTransitionTime":"2026-01-07T09:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.179173 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cj94" Jan 07 09:51:02 crc kubenswrapper[5131]: E0107 09:51:02.179382 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cj94" podUID="ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.179533 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.179645 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.179566 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 07 09:51:02 crc kubenswrapper[5131]: E0107 09:51:02.179924 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 07 09:51:02 crc kubenswrapper[5131]: E0107 09:51:02.179979 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 07 09:51:02 crc kubenswrapper[5131]: E0107 09:51:02.180115 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 07 09:51:02 crc kubenswrapper[5131]: E0107 09:51:02.183713 5131 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xwlcw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-gbjvz_openshift-multus(5b188180-f777-4a12-845b-d19fd5853d85): 
CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 07 09:51:02 crc kubenswrapper[5131]: E0107 09:51:02.185069 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-gbjvz" podUID="5b188180-f777-4a12-845b-d19fd5853d85" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.200431 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-wcqw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf4gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wcqw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.228762 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"592342ad-cf5f-4290-aa15-e99a6454cbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kpj7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.241713 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf037731-32c8-4638-9ee7-13bdb0c68279\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://45412eb529aa086
71402f5e439a2d0258d5e438466b13a1a3a8264e3eb9c8407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b059fcd2d184beda447aba3f6a320cb6d3f0c1bc3061fc47b9020d4c03f4a020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b059fcd2d184beda447aba3f6a320cb6d3f0c1bc3061fc47b9020d4c03f4a020\\\",\\\"exitCode\\\":0,\\\"f
inishedAt\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:49:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.261026 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a8b62c-1e16-4bf4-8a1a-7e21eea28a36\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://9ab1390be253b0acce2b38b656b6ab5fb3b2b0b0df6b0bf4aa1c9a6706d5b377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://222177f33dbcd646941928b01ab9b05233038233497ca1767fba6f7706b3dc93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f94695038c6c0633c279f363909c7c60ac6e6487469757ddfa9a64766e9ad38a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://81989c7ac8
01be354f6f1e78382dbefc67b72ef6a85367ea48e04fc6ff4f128e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce5c5322e4dfa939241d2f3f807c9d150117431e391c5f986a200413b054a33\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-07T09:50:19Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW0107 09:50:18.874623 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0107 09:50:18.874740 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0107 09:50:18.875448 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1194057718/tls.crt::/tmp/serving-cert-1194057718/tls.key\\\\\\\"\\\\nI0107 09:50:19.352672 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0107 09:50:19.355791 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0107 09:50:19.355824 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0107 09:50:19.355916 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0107 09:50:19.355934 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0107 09:50:19.362427 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0107 09:50:19.362473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0107 09:50:19.362471 1 genericapiserver.go:546] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0107 09:50:19.362482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 09:50:19.362512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0107 09:50:19.362519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0107 09:50:19.362527 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0107 09:50:19.362533 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0107 09:50:19.364774 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T09:50:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:50:50Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f7200e5d1d13d232ad67de2ea89381542d858c266de6b68e33bfe97a520bfd66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://dd6371190af55f4fae494e73d316e7347f26bd60e0b02bc18c31ce1cf7f1bb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd6371190af55f4fae494e73d316e7347f26bd60e0b02bc18c31ce1cf7f1bb9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:49:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.277760 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.277889 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.277921 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.277955 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.277982 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:02Z","lastTransitionTime":"2026-01-07T09:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.279085 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.292981 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5cj94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdv7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdv7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5cj94\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.306668 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3942e752-44ba-4678-8723-6cd778e60d73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet 
been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g97xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g97xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dvdrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.320477 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-n4kr4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad935b69-bef7-46a2-a03a-367404c13329\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9czf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9czf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-n4kr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.330987 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-mb6rx" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e402924-308a-4d47-8bf8-24a147d5f8bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zr9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mb6rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.360507 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84a14d49-a62a-496d-9134-f47c75840988\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://bcc1f440c98d635bb4817103fd1d9a17926b7a874f95ff484233a874c8eadeb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://8a7484877b854cc26fd09edc6fd5c32934c1dffbbe432bfe7aff19ab695ef69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://75bb0f73ec339c2c6734121cc7d17e1fc680fd5202133c971e39ab46778e5714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bbdfb2d1ed90a5108986f54b916f1abbd45a3bae0271525826521f154c84eb84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://6d3a5a8dfdfcd032674d1e587d9cbd4f65ba801617ba61300364dac7a766bcc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://86a293f775ea339fa870889624391ae039158ac4544d88b6f9c9d7c136e716a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86a293f775ea339fa870889624391ae039158ac4544d88b6f9c9d7c136e716a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://102d63810639c2cee7fa3e0fef9769b09374348e27bc61573718700039515aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://102d63810639c2cee7fa3e0fef9769b09374348e27bc61573718700039515aa7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://160fd415fae252c3639e426e9905fd01e6e8f42b4cbb66f8169427c602cc373f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://160fd415fae252c3639e426e9905fd01e6e8f42b4cbb66f8169427c602cc373f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T09:49:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T09:49:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:49:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.375924 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae48c9e0-ebbd-4c8e-9c54-f6b3ac967d34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://3383385e15ea13116da82fca0263faac293829a1d334c3ab9c3e887d3df064f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://81a19faef229379a9f11c9404f00a4cd033fe495e075b60878147f896005767f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a048a29a003bbae3bffb916e657c9b18246309ec82bcd1cf410f76e266ba25cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8b7f67a6eae4396f64fdd42279b61c6411a1dd1ad3f4d92b483b4cf59ff1284c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:49:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.381042 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.381099 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.381116 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.381141 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.381159 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:02Z","lastTransitionTime":"2026-01-07T09:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.390618 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2562cbe-7a5f-44ee-ab23-4c3c8713b3c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c8a53522c69210792aee2dce5b7e8e34b2cf22c24393e063a59b465373ab5096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1e073b8c65df9f45d38018d244c88e515556561a3c3feb4a2cf3d270c77064b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ffbff8be21e181dfc3205fb877325fee8beefff7ba32e422a2619e1ab9772a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\
\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1043d76beefe7dc0844f533476401d9ca57619ede4a2fa4b59df7c24ef674024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1043d76beefe7dc0844f533476401d9ca57619ede4a2fa4b59df7c24ef674024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:49:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.411685 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.424313 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.438585 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbjvz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b188180-f777-4a12-845b-d19fd5853d85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbjvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.453415 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.468322 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.483227 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.483623 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.483679 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.483698 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.483725 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.483744 5131 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:02Z","lastTransitionTime":"2026-01-07T09:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.493252 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mrsjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b094e1e2-9ae5-4cf3-9cef-71c25224af2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgbqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mrsjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.563356 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.578298 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.587667 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.587729 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.587746 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.587770 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.587787 5131 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:02Z","lastTransitionTime":"2026-01-07T09:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.595537 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.611753 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.622548 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mrsjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b094e1e2-9ae5-4cf3-9cef-71c25224af2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgbqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mrsjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.637647 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-wcqw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf4gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wcqw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.663229 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"592342ad-cf5f-4290-aa15-e99a6454cbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kpj7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.673053 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf037731-32c8-4638-9ee7-13bdb0c68279\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://45412eb529aa08671402f5e439a2d0258d5e438466b13a1a3a8264e3eb9c8407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b059fcd2d184beda447aba3f6a320cb6d3f0c1bc3061fc47b9020d4c03f4a020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b059fcd2d184beda447aba3f6a320cb6d3f0c1bc3061fc47b9020d4c03f4a020\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:49:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.690986 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:02 crc 
kubenswrapper[5131]: I0107 09:51:02.691051 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.691070 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.691098 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.691117 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:02Z","lastTransitionTime":"2026-01-07T09:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.691979 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a8b62c-1e16-4bf4-8a1a-7e21eea28a36\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:51:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:51:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://9ab1390be253b0acce2b38b656b6ab5fb3b2b0b0df6b0bf4aa1c9a6706d5b377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\
\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://222177f33dbcd646941928b01ab9b05233038233497ca1767fba6f7706b3dc93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f94695038c6c0633c279f363909c7c60ac6e6487469757ddfa9a64766e9ad38a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://81989c7ac801be354f6f1e78382dbefc67b72ef6a85367ea48e04fc6ff4f128e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce5c5322e4dfa939241d2f3f807c9d150117431e391c5f986a200413b054a33\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-07T09:50:19Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW0107 09:50:18.874623 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0107 09:50:18.874740 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0107 09:50:18.875448 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1194057718/tls.crt::/tmp/serving-cert-1194057718/tls.key\\\\\\\"\\\\nI0107 09:50:19.352672 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0107 09:50:19.355791 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0107 
09:50:19.355824 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0107 09:50:19.355916 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0107 09:50:19.355934 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0107 09:50:19.362427 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0107 09:50:19.362473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0107 09:50:19.362471 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0107 09:50:19.362482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 09:50:19.362512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0107 09:50:19.362519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0107 09:50:19.362527 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0107 09:50:19.362533 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0107 09:50:19.364774 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T09:50:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:50:50Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f7200e5d1d13d232ad67de2ea89381542d858c266de6b68e33bfe97a520bfd66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://dd6371190af55f4fae494e73d316e7347f26bd60e0b02bc18c31ce1cf7f1bb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd6371190af55f4fae494e73d316e7347f26bd60e0b02bc18c31ce1cf7f1bb9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:49:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.706946 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.717912 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5cj94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdv7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdv7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5cj94\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.730187 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3942e752-44ba-4678-8723-6cd778e60d73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet 
been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g97xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g97xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dvdrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.742500 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-n4kr4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad935b69-bef7-46a2-a03a-367404c13329\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9czf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9czf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-n4kr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.754065 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-mb6rx" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e402924-308a-4d47-8bf8-24a147d5f8bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zr9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mb6rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.783483 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84a14d49-a62a-496d-9134-f47c75840988\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://bcc1f440c98d635bb4817103fd1d9a17926b7a874f95ff484233a874c8eadeb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://8a7484877b854cc26fd09edc6fd5c32934c1dffbbe432bfe7aff19ab695ef69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://75bb0f73ec339c2c6734121cc7d17e1fc680fd5202133c971e39ab46778e5714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bbdfb2d1ed90a5108986f54b916f1abbd45a3bae0271525826521f154c84eb84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://6d3a5a8dfdfcd032674d1e587d9cbd4f65ba801617ba61300364dac7a766bcc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://86a293f775ea339fa870889624391ae039158ac4544d88b6f9c9d7c136e716a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86a293f775ea339fa870889624391ae039158ac4544d88b6f9c9d7c136e716a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://102d63810639c2cee7fa3e0fef9769b09374348e27bc61573718700039515aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://102d63810639c2cee7fa3e0fef9769b09374348e27bc61573718700039515aa7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://160fd415fae252c3639e426e9905fd01e6e8f42b4cbb66f8169427c602cc373f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://160fd415fae252c3639e426e9905fd01e6e8f42b4cbb66f8169427c602cc373f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T09:49:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T09:49:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:49:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.793515 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.793586 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.793611 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.793644 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.793669 5131 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:02Z","lastTransitionTime":"2026-01-07T09:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.802872 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae48c9e0-ebbd-4c8e-9c54-f6b3ac967d34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://3383385e15ea13116da82fca0263faac293829a1d334c3ab9c3e887d3df064f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cl
uster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://81a19faef229379a9f11c9404f00a4cd033fe495e075b60878147f896005767f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a
048a29a003bbae3bffb916e657c9b18246309ec82bcd1cf410f76e266ba25cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8b7f67a6eae4396f64fdd42279b61c6411a1dd1ad3f4d92b483b4cf59ff1284c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:49:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.819153 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2562cbe-7a5f-44ee-ab23-4c3c8713b3c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c8a53522c69210792aee2dce5b7e8e34b2cf22c24393e063a59b465373ab5096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1e073b8c65df9f45d38018d244c88e515556561a3c3feb4a2cf3d270c77064b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ffbff8be21e181dfc3205fb877325fee8beefff7ba32e422a2619e1ab9772a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\"
:0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1043d76beefe7dc0844f533476401d9ca57619ede4a2fa4b59df7c24ef674024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1043d76beefe7dc0844f533476401d9ca57619ede4a2fa4b59df7c24ef674024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:49:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.834445 5131 
status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.847597 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.865648 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbjvz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b188180-f777-4a12-845b-d19fd5853d85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbjvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.896388 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.896487 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.896535 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:02 
crc kubenswrapper[5131]: I0107 09:51:02.896558 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.896599 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:02Z","lastTransitionTime":"2026-01-07T09:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.999667 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.999728 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.999746 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.999769 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:02 crc kubenswrapper[5131]: I0107 09:51:02.999787 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:02Z","lastTransitionTime":"2026-01-07T09:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:03 crc kubenswrapper[5131]: I0107 09:51:03.101873 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:03 crc kubenswrapper[5131]: I0107 09:51:03.101916 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:03 crc kubenswrapper[5131]: I0107 09:51:03.101959 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:03 crc kubenswrapper[5131]: I0107 09:51:03.101992 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:03 crc kubenswrapper[5131]: I0107 09:51:03.102013 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:03Z","lastTransitionTime":"2026-01-07T09:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:03 crc kubenswrapper[5131]: E0107 09:51:03.182503 5131 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},Re
startPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 07 09:51:03 crc kubenswrapper[5131]: E0107 09:51:03.184142 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Jan 07 09:51:03 crc kubenswrapper[5131]: E0107 09:51:03.185027 5131 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 07 09:51:03 crc kubenswrapper[5131]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Jan 07 09:51:03 crc kubenswrapper[5131]: set -euo pipefail Jan 07 09:51:03 crc kubenswrapper[5131]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Jan 07 09:51:03 crc kubenswrapper[5131]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Jan 07 09:51:03 crc kubenswrapper[5131]: # As the secret mount is optional we must wait for the files to be present. Jan 07 09:51:03 crc kubenswrapper[5131]: # The service is created in monitor.yaml and this is created in sdn.yaml. 
Jan 07 09:51:03 crc kubenswrapper[5131]: TS=$(date +%s) Jan 07 09:51:03 crc kubenswrapper[5131]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Jan 07 09:51:03 crc kubenswrapper[5131]: HAS_LOGGED_INFO=0 Jan 07 09:51:03 crc kubenswrapper[5131]: Jan 07 09:51:03 crc kubenswrapper[5131]: log_missing_certs(){ Jan 07 09:51:03 crc kubenswrapper[5131]: CUR_TS=$(date +%s) Jan 07 09:51:03 crc kubenswrapper[5131]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Jan 07 09:51:03 crc kubenswrapper[5131]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Jan 07 09:51:03 crc kubenswrapper[5131]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Jan 07 09:51:03 crc kubenswrapper[5131]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Jan 07 09:51:03 crc kubenswrapper[5131]: HAS_LOGGED_INFO=1 Jan 07 09:51:03 crc kubenswrapper[5131]: fi Jan 07 09:51:03 crc kubenswrapper[5131]: } Jan 07 09:51:03 crc kubenswrapper[5131]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Jan 07 09:51:03 crc kubenswrapper[5131]: log_missing_certs Jan 07 09:51:03 crc kubenswrapper[5131]: sleep 5 Jan 07 09:51:03 crc kubenswrapper[5131]: done Jan 07 09:51:03 crc kubenswrapper[5131]: Jan 07 09:51:03 crc kubenswrapper[5131]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Jan 07 09:51:03 crc kubenswrapper[5131]: exec /usr/bin/kube-rbac-proxy \ Jan 07 09:51:03 crc kubenswrapper[5131]: --logtostderr \ Jan 07 09:51:03 crc kubenswrapper[5131]: --secure-listen-address=:9108 \ Jan 07 09:51:03 crc kubenswrapper[5131]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Jan 07 09:51:03 crc kubenswrapper[5131]: --upstream=http://127.0.0.1:29108/ \ Jan 07 09:51:03 crc kubenswrapper[5131]: --tls-private-key-file=${TLS_PK} \ Jan 07 09:51:03 crc kubenswrapper[5131]: --tls-cert-file=${TLS_CERT} Jan 07 09:51:03 crc kubenswrapper[5131]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r9czf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-n4kr4_openshift-ovn-kubernetes(ad935b69-bef7-46a2-a03a-367404c13329): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 07 09:51:03 crc kubenswrapper[5131]: > logger="UnhandledError" Jan 07 09:51:03 crc kubenswrapper[5131]: E0107 09:51:03.190327 5131 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 07 09:51:03 crc kubenswrapper[5131]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 07 09:51:03 crc kubenswrapper[5131]: if [[ -f "/env/_master" ]]; then Jan 07 09:51:03 crc kubenswrapper[5131]: set -o allexport Jan 07 09:51:03 crc kubenswrapper[5131]: source "/env/_master" Jan 07 09:51:03 crc kubenswrapper[5131]: set +o allexport Jan 07 09:51:03 crc kubenswrapper[5131]: fi Jan 07 09:51:03 crc kubenswrapper[5131]: Jan 07 09:51:03 crc kubenswrapper[5131]: ovn_v4_join_subnet_opt= Jan 07 09:51:03 crc kubenswrapper[5131]: if [[ "" != "" ]]; then Jan 07 09:51:03 crc kubenswrapper[5131]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Jan 07 
09:51:03 crc kubenswrapper[5131]: fi Jan 07 09:51:03 crc kubenswrapper[5131]: ovn_v6_join_subnet_opt= Jan 07 09:51:03 crc kubenswrapper[5131]: if [[ "" != "" ]]; then Jan 07 09:51:03 crc kubenswrapper[5131]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Jan 07 09:51:03 crc kubenswrapper[5131]: fi Jan 07 09:51:03 crc kubenswrapper[5131]: Jan 07 09:51:03 crc kubenswrapper[5131]: ovn_v4_transit_switch_subnet_opt= Jan 07 09:51:03 crc kubenswrapper[5131]: if [[ "" != "" ]]; then Jan 07 09:51:03 crc kubenswrapper[5131]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Jan 07 09:51:03 crc kubenswrapper[5131]: fi Jan 07 09:51:03 crc kubenswrapper[5131]: ovn_v6_transit_switch_subnet_opt= Jan 07 09:51:03 crc kubenswrapper[5131]: if [[ "" != "" ]]; then Jan 07 09:51:03 crc kubenswrapper[5131]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Jan 07 09:51:03 crc kubenswrapper[5131]: fi Jan 07 09:51:03 crc kubenswrapper[5131]: Jan 07 09:51:03 crc kubenswrapper[5131]: dns_name_resolver_enabled_flag= Jan 07 09:51:03 crc kubenswrapper[5131]: if [[ "false" == "true" ]]; then Jan 07 09:51:03 crc kubenswrapper[5131]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Jan 07 09:51:03 crc kubenswrapper[5131]: fi Jan 07 09:51:03 crc kubenswrapper[5131]: Jan 07 09:51:03 crc kubenswrapper[5131]: persistent_ips_enabled_flag="--enable-persistent-ips" Jan 07 09:51:03 crc kubenswrapper[5131]: Jan 07 09:51:03 crc kubenswrapper[5131]: # This is needed so that converting clusters from GA to TP Jan 07 09:51:03 crc kubenswrapper[5131]: # will rollout control plane pods as well Jan 07 09:51:03 crc kubenswrapper[5131]: network_segmentation_enabled_flag= Jan 07 09:51:03 crc kubenswrapper[5131]: multi_network_enabled_flag= Jan 07 09:51:03 crc kubenswrapper[5131]: if [[ "true" == "true" ]]; then Jan 07 09:51:03 crc kubenswrapper[5131]: multi_network_enabled_flag="--enable-multi-network" Jan 07 09:51:03 crc kubenswrapper[5131]: fi 
Jan 07 09:51:03 crc kubenswrapper[5131]: if [[ "true" == "true" ]]; then Jan 07 09:51:03 crc kubenswrapper[5131]: if [[ "true" != "true" ]]; then Jan 07 09:51:03 crc kubenswrapper[5131]: multi_network_enabled_flag="--enable-multi-network" Jan 07 09:51:03 crc kubenswrapper[5131]: fi Jan 07 09:51:03 crc kubenswrapper[5131]: network_segmentation_enabled_flag="--enable-network-segmentation" Jan 07 09:51:03 crc kubenswrapper[5131]: fi Jan 07 09:51:03 crc kubenswrapper[5131]: Jan 07 09:51:03 crc kubenswrapper[5131]: route_advertisements_enable_flag= Jan 07 09:51:03 crc kubenswrapper[5131]: if [[ "false" == "true" ]]; then Jan 07 09:51:03 crc kubenswrapper[5131]: route_advertisements_enable_flag="--enable-route-advertisements" Jan 07 09:51:03 crc kubenswrapper[5131]: fi Jan 07 09:51:03 crc kubenswrapper[5131]: Jan 07 09:51:03 crc kubenswrapper[5131]: preconfigured_udn_addresses_enable_flag= Jan 07 09:51:03 crc kubenswrapper[5131]: if [[ "false" == "true" ]]; then Jan 07 09:51:03 crc kubenswrapper[5131]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Jan 07 09:51:03 crc kubenswrapper[5131]: fi Jan 07 09:51:03 crc kubenswrapper[5131]: Jan 07 09:51:03 crc kubenswrapper[5131]: # Enable multi-network policy if configured (control-plane always full mode) Jan 07 09:51:03 crc kubenswrapper[5131]: multi_network_policy_enabled_flag= Jan 07 09:51:03 crc kubenswrapper[5131]: if [[ "false" == "true" ]]; then Jan 07 09:51:03 crc kubenswrapper[5131]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Jan 07 09:51:03 crc kubenswrapper[5131]: fi Jan 07 09:51:03 crc kubenswrapper[5131]: Jan 07 09:51:03 crc kubenswrapper[5131]: # Enable admin network policy if configured (control-plane always full mode) Jan 07 09:51:03 crc kubenswrapper[5131]: admin_network_policy_enabled_flag= Jan 07 09:51:03 crc kubenswrapper[5131]: if [[ "true" == "true" ]]; then Jan 07 09:51:03 crc kubenswrapper[5131]: 
admin_network_policy_enabled_flag="--enable-admin-network-policy" Jan 07 09:51:03 crc kubenswrapper[5131]: fi Jan 07 09:51:03 crc kubenswrapper[5131]: Jan 07 09:51:03 crc kubenswrapper[5131]: if [ "shared" == "shared" ]; then Jan 07 09:51:03 crc kubenswrapper[5131]: gateway_mode_flags="--gateway-mode shared" Jan 07 09:51:03 crc kubenswrapper[5131]: elif [ "shared" == "local" ]; then Jan 07 09:51:03 crc kubenswrapper[5131]: gateway_mode_flags="--gateway-mode local" Jan 07 09:51:03 crc kubenswrapper[5131]: else Jan 07 09:51:03 crc kubenswrapper[5131]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." Jan 07 09:51:03 crc kubenswrapper[5131]: exit 1 Jan 07 09:51:03 crc kubenswrapper[5131]: fi Jan 07 09:51:03 crc kubenswrapper[5131]: Jan 07 09:51:03 crc kubenswrapper[5131]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Jan 07 09:51:03 crc kubenswrapper[5131]: exec /usr/bin/ovnkube \ Jan 07 09:51:03 crc kubenswrapper[5131]: --enable-interconnect \ Jan 07 09:51:03 crc kubenswrapper[5131]: --init-cluster-manager "${K8S_NODE}" \ Jan 07 09:51:03 crc kubenswrapper[5131]: --config-file=/run/ovnkube-config/ovnkube.conf \ Jan 07 09:51:03 crc kubenswrapper[5131]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Jan 07 09:51:03 crc kubenswrapper[5131]: --metrics-bind-address "127.0.0.1:29108" \ Jan 07 09:51:03 crc kubenswrapper[5131]: --metrics-enable-pprof \ Jan 07 09:51:03 crc kubenswrapper[5131]: --metrics-enable-config-duration \ Jan 07 09:51:03 crc kubenswrapper[5131]: ${ovn_v4_join_subnet_opt} \ Jan 07 09:51:03 crc kubenswrapper[5131]: ${ovn_v6_join_subnet_opt} \ Jan 07 09:51:03 crc kubenswrapper[5131]: ${ovn_v4_transit_switch_subnet_opt} \ Jan 07 09:51:03 crc kubenswrapper[5131]: ${ovn_v6_transit_switch_subnet_opt} \ Jan 07 09:51:03 crc kubenswrapper[5131]: ${dns_name_resolver_enabled_flag} \ Jan 07 09:51:03 crc kubenswrapper[5131]: ${persistent_ips_enabled_flag} \ Jan 07 09:51:03 crc 
kubenswrapper[5131]: ${multi_network_enabled_flag} \ Jan 07 09:51:03 crc kubenswrapper[5131]: ${network_segmentation_enabled_flag} \ Jan 07 09:51:03 crc kubenswrapper[5131]: ${gateway_mode_flags} \ Jan 07 09:51:03 crc kubenswrapper[5131]: ${route_advertisements_enable_flag} \ Jan 07 09:51:03 crc kubenswrapper[5131]: ${preconfigured_udn_addresses_enable_flag} \ Jan 07 09:51:03 crc kubenswrapper[5131]: --enable-egress-ip=true \ Jan 07 09:51:03 crc kubenswrapper[5131]: --enable-egress-firewall=true \ Jan 07 09:51:03 crc kubenswrapper[5131]: --enable-egress-qos=true \ Jan 07 09:51:03 crc kubenswrapper[5131]: --enable-egress-service=true \ Jan 07 09:51:03 crc kubenswrapper[5131]: --enable-multicast \ Jan 07 09:51:03 crc kubenswrapper[5131]: --enable-multi-external-gateway=true \ Jan 07 09:51:03 crc kubenswrapper[5131]: ${multi_network_policy_enabled_flag} \ Jan 07 09:51:03 crc kubenswrapper[5131]: ${admin_network_policy_enabled_flag} Jan 07 09:51:03 crc kubenswrapper[5131]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r9czf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-n4kr4_openshift-ovn-kubernetes(ad935b69-bef7-46a2-a03a-367404c13329): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 07 09:51:03 crc kubenswrapper[5131]: > logger="UnhandledError" Jan 07 09:51:03 crc kubenswrapper[5131]: E0107 09:51:03.193978 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-n4kr4" podUID="ad935b69-bef7-46a2-a03a-367404c13329" Jan 07 09:51:03 crc kubenswrapper[5131]: I0107 09:51:03.203823 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:03 crc kubenswrapper[5131]: I0107 09:51:03.203919 5131 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:51:03 crc kubenswrapper[5131]: I0107 09:51:03.203947 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:51:03 crc kubenswrapper[5131]: I0107 09:51:03.203971 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 07 09:51:03 crc kubenswrapper[5131]: I0107 09:51:03.203989 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:03Z","lastTransitionTime":"2026-01-07T09:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 07 09:51:03 crc kubenswrapper[5131]: I0107 09:51:03.306267 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:51:03 crc kubenswrapper[5131]: I0107 09:51:03.306311 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:51:03 crc kubenswrapper[5131]: I0107 09:51:03.306329 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:51:03 crc kubenswrapper[5131]: I0107 09:51:03.306352 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 07 09:51:03 crc kubenswrapper[5131]: I0107 09:51:03.306369 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:03Z","lastTransitionTime":"2026-01-07T09:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 07 09:51:03 crc kubenswrapper[5131]: I0107 09:51:03.408650 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:51:03 crc kubenswrapper[5131]: I0107 09:51:03.408700 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:51:03 crc kubenswrapper[5131]: I0107 09:51:03.408716 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:51:03 crc kubenswrapper[5131]: I0107 09:51:03.408737 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 07 09:51:03 crc kubenswrapper[5131]: I0107 09:51:03.408755 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:03Z","lastTransitionTime":"2026-01-07T09:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 07 09:51:03 crc kubenswrapper[5131]: I0107 09:51:03.511618 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:51:03 crc kubenswrapper[5131]: I0107 09:51:03.511694 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:51:03 crc kubenswrapper[5131]: I0107 09:51:03.511714 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:51:03 crc kubenswrapper[5131]: I0107 09:51:03.511739 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 07 09:51:03 crc kubenswrapper[5131]: I0107 09:51:03.511758 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:03Z","lastTransitionTime":"2026-01-07T09:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 07 09:51:03 crc kubenswrapper[5131]: I0107 09:51:03.613554 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:51:03 crc kubenswrapper[5131]: I0107 09:51:03.613630 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:51:03 crc kubenswrapper[5131]: I0107 09:51:03.613649 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:51:03 crc kubenswrapper[5131]: I0107 09:51:03.613671 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 07 09:51:03 crc kubenswrapper[5131]: I0107 09:51:03.613690 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:03Z","lastTransitionTime":"2026-01-07T09:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 07 09:51:03 crc kubenswrapper[5131]: I0107 09:51:03.716596 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:51:03 crc kubenswrapper[5131]: I0107 09:51:03.716662 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:51:03 crc kubenswrapper[5131]: I0107 09:51:03.716679 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:51:03 crc kubenswrapper[5131]: I0107 09:51:03.716704 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 07 09:51:03 crc kubenswrapper[5131]: I0107 09:51:03.716722 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:03Z","lastTransitionTime":"2026-01-07T09:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 07 09:51:03 crc kubenswrapper[5131]: I0107 09:51:03.819473 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:51:03 crc kubenswrapper[5131]: I0107 09:51:03.819536 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:51:03 crc kubenswrapper[5131]: I0107 09:51:03.819554 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:51:03 crc kubenswrapper[5131]: I0107 09:51:03.819578 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 07 09:51:03 crc kubenswrapper[5131]: I0107 09:51:03.819595 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:03Z","lastTransitionTime":"2026-01-07T09:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 07 09:51:03 crc kubenswrapper[5131]: I0107 09:51:03.922273 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:51:03 crc kubenswrapper[5131]: I0107 09:51:03.922317 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:51:03 crc kubenswrapper[5131]: I0107 09:51:03.922329 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:51:03 crc kubenswrapper[5131]: I0107 09:51:03.922348 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 07 09:51:03 crc kubenswrapper[5131]: I0107 09:51:03.922359 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:03Z","lastTransitionTime":"2026-01-07T09:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.025464 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.025600 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.025632 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.025668 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.025690 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:04Z","lastTransitionTime":"2026-01-07T09:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.128127 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.128229 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.128243 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.128262 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.128274 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:04Z","lastTransitionTime":"2026-01-07T09:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.180282 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 07 09:51:04 crc kubenswrapper[5131]: E0107 09:51:04.180501 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.180564 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cj94"
Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.180284 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 07 09:51:04 crc kubenswrapper[5131]: E0107 09:51:04.180740 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cj94" podUID="ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e"
Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.180560 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 07 09:51:04 crc kubenswrapper[5131]: E0107 09:51:04.181224 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 07 09:51:04 crc kubenswrapper[5131]: E0107 09:51:04.181984 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 07 09:51:04 crc kubenswrapper[5131]: E0107 09:51:04.184913 5131 kuberuntime_manager.go:1358] "Unhandled Error" err=<
Jan 07 09:51:04 crc kubenswrapper[5131]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe
Jan 07 09:51:04 crc kubenswrapper[5131]: if [[ -f "/env/_master" ]]; then
Jan 07 09:51:04 crc kubenswrapper[5131]: set -o allexport
Jan 07 09:51:04 crc kubenswrapper[5131]: source "/env/_master"
Jan 07 09:51:04 crc kubenswrapper[5131]: set +o allexport
Jan 07 09:51:04 crc kubenswrapper[5131]: fi
Jan 07 09:51:04 crc kubenswrapper[5131]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled.
Jan 07 09:51:04 crc kubenswrapper[5131]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791
Jan 07 09:51:04 crc kubenswrapper[5131]: ho_enable="--enable-hybrid-overlay"
Jan 07 09:51:04 crc kubenswrapper[5131]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook"
Jan 07 09:51:04 crc kubenswrapper[5131]: # extra-allowed-user: service account `ovn-kubernetes-control-plane`
Jan 07 09:51:04 crc kubenswrapper[5131]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager)
Jan 07 09:51:04 crc kubenswrapper[5131]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \
Jan 07 09:51:04 crc kubenswrapper[5131]: --webhook-cert-dir="/etc/webhook-cert" \
Jan 07 09:51:04 crc kubenswrapper[5131]: --webhook-host=127.0.0.1 \
Jan 07 09:51:04 crc kubenswrapper[5131]: --webhook-port=9743 \
Jan 07 09:51:04 crc kubenswrapper[5131]: ${ho_enable} \
Jan 07 09:51:04 crc kubenswrapper[5131]:
--enable-interconnect \
Jan 07 09:51:04 crc kubenswrapper[5131]: --disable-approver \
Jan 07 09:51:04 crc kubenswrapper[5131]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \
Jan 07 09:51:04 crc kubenswrapper[5131]: --wait-for-kubernetes-api=200s \
Jan 07 09:51:04 crc kubenswrapper[5131]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \
Jan 07 09:51:04 crc kubenswrapper[5131]: --loglevel="${LOGLEVEL}"
Jan 07 09:51:04 crc kubenswrapper[5131]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Jan 07 09:51:04 crc kubenswrapper[5131]: > logger="UnhandledError"
Jan 07 09:51:04 crc kubenswrapper[5131]: E0107 09:51:04.185215 5131 kuberuntime_manager.go:1358] "Unhandled Error" err=<
Jan 07 09:51:04 crc kubenswrapper[5131]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash
Jan 07 09:51:04 crc kubenswrapper[5131]: set -uo pipefail
Jan 07 09:51:04 crc kubenswrapper[5131]:
Jan 07 09:51:04 crc kubenswrapper[5131]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM
Jan 07 09:51:04 crc kubenswrapper[5131]:
Jan 07 09:51:04 crc kubenswrapper[5131]: OPENSHIFT_MARKER="openshift-generated-node-resolver"
Jan 07 09:51:04 crc kubenswrapper[5131]: HOSTS_FILE="/etc/hosts"
Jan 07 09:51:04 crc kubenswrapper[5131]: TEMP_FILE="/tmp/hosts.tmp"
Jan 07 09:51:04 crc kubenswrapper[5131]:
Jan 07 09:51:04 crc kubenswrapper[5131]: IFS=', ' read -r -a services <<< "${SERVICES}"
Jan 07 09:51:04 crc kubenswrapper[5131]:
Jan 07 09:51:04 crc kubenswrapper[5131]: # Make a temporary file with the old hosts file's attributes.
Jan 07 09:51:04 crc kubenswrapper[5131]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then
Jan 07 09:51:04 crc kubenswrapper[5131]: echo "Failed to preserve hosts file. Exiting."
Jan 07 09:51:04 crc kubenswrapper[5131]: exit 1
Jan 07 09:51:04 crc kubenswrapper[5131]: fi
Jan 07 09:51:04 crc kubenswrapper[5131]:
Jan 07 09:51:04 crc kubenswrapper[5131]: while true; do
Jan 07 09:51:04 crc kubenswrapper[5131]: declare -A svc_ips
Jan 07 09:51:04 crc kubenswrapper[5131]: for svc in "${services[@]}"; do
Jan 07 09:51:04 crc kubenswrapper[5131]: # Fetch service IP from cluster dns if present. We make several tries
Jan 07 09:51:04 crc kubenswrapper[5131]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones
Jan 07 09:51:04 crc kubenswrapper[5131]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not
Jan 07 09:51:04 crc kubenswrapper[5131]: # support UDP loadbalancers and require reaching DNS through TCP.
Jan 07 09:51:04 crc kubenswrapper[5131]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"'
Jan 07 09:51:04 crc kubenswrapper[5131]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"'
Jan 07 09:51:04 crc kubenswrapper[5131]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"'
Jan 07 09:51:04 crc kubenswrapper[5131]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"')
Jan 07 09:51:04 crc kubenswrapper[5131]: for i in ${!cmds[*]}
Jan 07 09:51:04 crc kubenswrapper[5131]: do
Jan 07 09:51:04 crc kubenswrapper[5131]: ips=($(eval "${cmds[i]}"))
Jan 07 09:51:04 crc kubenswrapper[5131]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then
Jan 07 09:51:04 crc kubenswrapper[5131]: svc_ips["${svc}"]="${ips[@]}"
Jan 07 09:51:04 crc kubenswrapper[5131]: break
Jan 07 09:51:04 crc kubenswrapper[5131]: fi
Jan 07 09:51:04 crc kubenswrapper[5131]: done
Jan 07 09:51:04 crc kubenswrapper[5131]: done
Jan 07 09:51:04 crc kubenswrapper[5131]:
Jan 07 09:51:04 crc kubenswrapper[5131]: # Update /etc/hosts only if we get valid service IPs
Jan 07 09:51:04 crc kubenswrapper[5131]: # We will not update /etc/hosts when there is coredns service outage or api unavailability
Jan 07 09:51:04 crc kubenswrapper[5131]: # Stale entries could exist in /etc/hosts if the service is deleted
Jan 07 09:51:04 crc kubenswrapper[5131]: if [[ -n "${svc_ips[*]-}" ]]; then
Jan 07 09:51:04 crc kubenswrapper[5131]: # Build a new hosts file from /etc/hosts with our custom entries filtered out
Jan 07 09:51:04 crc kubenswrapper[5131]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then
Jan 07 09:51:04 crc kubenswrapper[5131]: # Only continue rebuilding the hosts entries if its original content is preserved
Jan 07 09:51:04 crc kubenswrapper[5131]: sleep 60 & wait
Jan 07 09:51:04 crc kubenswrapper[5131]: continue
Jan 07 09:51:04 crc kubenswrapper[5131]: fi
Jan 07 09:51:04 crc kubenswrapper[5131]:
Jan 07 09:51:04 crc kubenswrapper[5131]: # Append resolver entries for services
Jan 07 09:51:04 crc kubenswrapper[5131]: rc=0
Jan 07 09:51:04 crc kubenswrapper[5131]: for svc in "${!svc_ips[@]}"; do
Jan 07 09:51:04 crc kubenswrapper[5131]: for ip in ${svc_ips[${svc}]}; do
Jan 07 09:51:04 crc kubenswrapper[5131]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$?
Jan 07 09:51:04 crc kubenswrapper[5131]: done
Jan 07 09:51:04 crc kubenswrapper[5131]: done
Jan 07 09:51:04 crc kubenswrapper[5131]: if [[ $rc -ne 0 ]]; then
Jan 07 09:51:04 crc kubenswrapper[5131]: sleep 60 & wait
Jan 07 09:51:04 crc kubenswrapper[5131]: continue
Jan 07 09:51:04 crc kubenswrapper[5131]: fi
Jan 07 09:51:04 crc kubenswrapper[5131]:
Jan 07 09:51:04 crc kubenswrapper[5131]:
Jan 07 09:51:04 crc kubenswrapper[5131]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior
Jan 07 09:51:04 crc kubenswrapper[5131]: # Replace /etc/hosts with our modified version if needed
Jan 07 09:51:04 crc kubenswrapper[5131]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}"
Jan 07 09:51:04 crc kubenswrapper[5131]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn
Jan 07 09:51:04 crc kubenswrapper[5131]: fi
Jan 07 09:51:04 crc kubenswrapper[5131]: sleep 60 & wait
Jan 07 09:51:04 crc kubenswrapper[5131]: unset svc_ips
Jan 07 09:51:04 crc kubenswrapper[5131]: done
Jan 07 09:51:04 crc kubenswrapper[5131]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zr9m5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-mb6rx_openshift-dns(1e402924-308a-4d47-8bf8-24a147d5f8bf): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Jan 07 09:51:04 crc kubenswrapper[5131]: > logger="UnhandledError"
Jan 07 09:51:04 crc kubenswrapper[5131]: E0107 09:51:04.185452 5131 kuberuntime_manager.go:1358] "Unhandled Error" err=<
Jan 07 09:51:04 crc kubenswrapper[5131]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash
Jan 07 09:51:04 crc kubenswrapper[5131]: set -o allexport
Jan 07 09:51:04 crc kubenswrapper[5131]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then
Jan 07 09:51:04 crc kubenswrapper[5131]: source /etc/kubernetes/apiserver-url.env
Jan 07 09:51:04 crc kubenswrapper[5131]: else
Jan 07 09:51:04 crc kubenswrapper[5131]: echo "Error: /etc/kubernetes/apiserver-url.env is missing"
Jan 07 09:51:04 crc kubenswrapper[5131]: exit 1
Jan 07 09:51:04 crc kubenswrapper[5131]: fi
Jan 07 09:51:04 crc kubenswrapper[5131]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104
Jan 07 09:51:04 crc kubenswrapper[5131]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Jan 07 09:51:04 crc kubenswrapper[5131]: > logger="UnhandledError"
Jan 07 09:51:04 crc kubenswrapper[5131]: E0107 09:51:04.185437 5131 kuberuntime_manager.go:1358] "Unhandled Error" err=<
Jan 07 09:51:04 crc kubenswrapper[5131]: container
&Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM
Jan 07 09:51:04 crc kubenswrapper[5131]: while [ true ];
Jan 07 09:51:04 crc kubenswrapper[5131]: do
Jan 07 09:51:04 crc kubenswrapper[5131]: for f in $(ls /tmp/serviceca); do
Jan 07 09:51:04 crc kubenswrapper[5131]: echo $f
Jan 07 09:51:04 crc kubenswrapper[5131]: ca_file_path="/tmp/serviceca/${f}"
Jan 07 09:51:04 crc kubenswrapper[5131]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/')
Jan 07 09:51:04 crc kubenswrapper[5131]: reg_dir_path="/etc/docker/certs.d/${f}"
Jan 07 09:51:04 crc kubenswrapper[5131]: if [ -e "${reg_dir_path}" ]; then
Jan 07 09:51:04 crc kubenswrapper[5131]: cp -u $ca_file_path $reg_dir_path/ca.crt
Jan 07 09:51:04 crc kubenswrapper[5131]: else
Jan 07 09:51:04 crc kubenswrapper[5131]: mkdir $reg_dir_path
Jan 07 09:51:04 crc kubenswrapper[5131]: cp $ca_file_path $reg_dir_path/ca.crt
Jan 07 09:51:04 crc kubenswrapper[5131]: fi
Jan 07 09:51:04 crc kubenswrapper[5131]: done
Jan 07 09:51:04 crc kubenswrapper[5131]: for d in $(ls /etc/docker/certs.d); do
Jan 07 09:51:04 crc kubenswrapper[5131]: echo $d
Jan 07 09:51:04 crc kubenswrapper[5131]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./')
Jan 07 09:51:04 crc kubenswrapper[5131]: reg_conf_path="/tmp/serviceca/${dp}"
Jan 07 09:51:04 crc kubenswrapper[5131]: if [ ! -e "${reg_conf_path}" ]; then
Jan 07 09:51:04 crc kubenswrapper[5131]: rm -rf /etc/docker/certs.d/$d
Jan 07 09:51:04 crc kubenswrapper[5131]: fi
Jan 07 09:51:04 crc kubenswrapper[5131]: done
Jan 07 09:51:04 crc kubenswrapper[5131]: sleep 60 & wait ${!}
Jan 07 09:51:04 crc kubenswrapper[5131]: done
Jan 07 09:51:04 crc kubenswrapper[5131]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qgbqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-mrsjt_openshift-image-registry(b094e1e2-9ae5-4cf3-9cef-71c25224af2a): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Jan 07 09:51:04 crc kubenswrapper[5131]: > logger="UnhandledError"
Jan 07 09:51:04 crc kubenswrapper[5131]: E0107 09:51:04.187055 5131
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-mb6rx" podUID="1e402924-308a-4d47-8bf8-24a147d5f8bf" Jan 07 09:51:04 crc kubenswrapper[5131]: E0107 09:51:04.187181 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-mrsjt" podUID="b094e1e2-9ae5-4cf3-9cef-71c25224af2a" Jan 07 09:51:04 crc kubenswrapper[5131]: E0107 09:51:04.189300 5131 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 07 09:51:04 crc kubenswrapper[5131]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 07 09:51:04 crc kubenswrapper[5131]: if [[ -f "/env/_master" ]]; then Jan 07 09:51:04 crc kubenswrapper[5131]: set -o allexport Jan 07 09:51:04 crc kubenswrapper[5131]: source "/env/_master" Jan 07 09:51:04 crc kubenswrapper[5131]: set +o allexport Jan 07 09:51:04 crc kubenswrapper[5131]: fi Jan 07 09:51:04 crc kubenswrapper[5131]: Jan 07 09:51:04 crc kubenswrapper[5131]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Jan 07 09:51:04 crc kubenswrapper[5131]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 07 09:51:04 crc kubenswrapper[5131]: --disable-webhook \ Jan 07 09:51:04 crc kubenswrapper[5131]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Jan 07 09:51:04 crc kubenswrapper[5131]: --loglevel="${LOGLEVEL}" Jan 07 09:51:04 crc kubenswrapper[5131]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 07 09:51:04 crc kubenswrapper[5131]: > logger="UnhandledError" Jan 07 09:51:04 crc kubenswrapper[5131]: E0107 09:51:04.191404 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to 
\"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Jan 07 09:51:04 crc kubenswrapper[5131]: E0107 09:51:04.192787 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.232084 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.232193 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.232215 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.232243 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.232263 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:04Z","lastTransitionTime":"2026-01-07T09:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.334776 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.334823 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.334862 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.334879 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.334889 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:04Z","lastTransitionTime":"2026-01-07T09:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.437755 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.437827 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.437882 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.437915 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.437932 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:04Z","lastTransitionTime":"2026-01-07T09:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.484905 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.485034 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.485064 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.485100 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.485125 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:04Z","lastTransitionTime":"2026-01-07T09:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:04 crc kubenswrapper[5131]: E0107 09:51:04.501391 5131 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400452Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861252Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:51:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:51:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:51:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:51:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:51:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:51:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:51:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:51:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd75f290-f432-4d83-b44b-78dd53c6e94f\\\",\\\"systemUUID\\\":\\\"8ea6fa36-73d5-4d37-aab0-72c44945d452\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.506191 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.506246 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.506270 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.506297 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.506319 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:04Z","lastTransitionTime":"2026-01-07T09:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:04 crc kubenswrapper[5131]: E0107 09:51:04.524243 5131 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400452Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861252Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:51:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:51:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:51:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:51:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:51:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:51:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:51:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:51:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd75f290-f432-4d83-b44b-78dd53c6e94f\\\",\\\"systemUUID\\\":\\\"8ea6fa36-73d5-4d37-aab0-72c44945d452\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.529502 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.529575 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.529594 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.529620 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.529640 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:04Z","lastTransitionTime":"2026-01-07T09:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd75f290-f432-4d83-b44b-78dd53c6e94f\\\",\\\"systemUUID\\\":\\\"8ea6fa36-73d5-4d37-aab0-72c44945d452\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:04 crc kubenswrapper[5131]: E0107 09:51:04.584266 5131 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.586111 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.586172 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.586191 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.586217 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.586236 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:04Z","lastTransitionTime":"2026-01-07T09:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.688874 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.688938 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.688956 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.688980 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.688997 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:04Z","lastTransitionTime":"2026-01-07T09:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.690328 5131 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.790913 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.791010 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.791036 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.791066 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.791090 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:04Z","lastTransitionTime":"2026-01-07T09:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.894235 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.894320 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.894338 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.894376 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.894391 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:04Z","lastTransitionTime":"2026-01-07T09:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.997391 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.997477 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.997493 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.997518 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:04 crc kubenswrapper[5131]: I0107 09:51:04.997535 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:04Z","lastTransitionTime":"2026-01-07T09:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:05 crc kubenswrapper[5131]: I0107 09:51:05.099829 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:05 crc kubenswrapper[5131]: I0107 09:51:05.099943 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:05 crc kubenswrapper[5131]: I0107 09:51:05.099964 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:05 crc kubenswrapper[5131]: I0107 09:51:05.099995 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:05 crc kubenswrapper[5131]: I0107 09:51:05.100015 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:05Z","lastTransitionTime":"2026-01-07T09:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:05 crc kubenswrapper[5131]: E0107 09:51:05.183225 5131 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g97xn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 
},Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-dvdrn_openshift-machine-config-operator(3942e752-44ba-4678-8723-6cd778e60d73): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 07 09:51:05 crc kubenswrapper[5131]: E0107 09:51:05.183600 5131 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 07 09:51:05 crc kubenswrapper[5131]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Jan 07 09:51:05 crc kubenswrapper[5131]: apiVersion: v1 Jan 07 09:51:05 crc kubenswrapper[5131]: clusters: Jan 07 09:51:05 crc kubenswrapper[5131]: - cluster: Jan 07 09:51:05 crc kubenswrapper[5131]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Jan 07 09:51:05 crc kubenswrapper[5131]: server: https://api-int.crc.testing:6443 Jan 07 09:51:05 crc kubenswrapper[5131]: name: default-cluster Jan 07 09:51:05 crc kubenswrapper[5131]: contexts: Jan 07 09:51:05 crc kubenswrapper[5131]: - context: Jan 07 09:51:05 crc 
kubenswrapper[5131]: cluster: default-cluster Jan 07 09:51:05 crc kubenswrapper[5131]: namespace: default Jan 07 09:51:05 crc kubenswrapper[5131]: user: default-auth Jan 07 09:51:05 crc kubenswrapper[5131]: name: default-context Jan 07 09:51:05 crc kubenswrapper[5131]: current-context: default-context Jan 07 09:51:05 crc kubenswrapper[5131]: kind: Config Jan 07 09:51:05 crc kubenswrapper[5131]: preferences: {} Jan 07 09:51:05 crc kubenswrapper[5131]: users: Jan 07 09:51:05 crc kubenswrapper[5131]: - name: default-auth Jan 07 09:51:05 crc kubenswrapper[5131]: user: Jan 07 09:51:05 crc kubenswrapper[5131]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 07 09:51:05 crc kubenswrapper[5131]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 07 09:51:05 crc kubenswrapper[5131]: EOF Jan 07 09:51:05 crc kubenswrapper[5131]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-78wtj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-kpj7m_openshift-ovn-kubernetes(592342ad-cf5f-4290-aa15-e99a6454cbf5): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 07 09:51:05 crc kubenswrapper[5131]: > 
logger="UnhandledError" Jan 07 09:51:05 crc kubenswrapper[5131]: E0107 09:51:05.183815 5131 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 07 09:51:05 crc kubenswrapper[5131]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Jan 07 09:51:05 crc kubenswrapper[5131]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Jan 07 09:51:05 crc kubenswrapper[5131]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{
Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pf4gw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-wcqw9_openshift-multus(a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 07 09:51:05 crc kubenswrapper[5131]: > logger="UnhandledError" Jan 07 09:51:05 crc kubenswrapper[5131]: E0107 09:51:05.184868 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" podUID="592342ad-cf5f-4290-aa15-e99a6454cbf5" Jan 07 09:51:05 crc kubenswrapper[5131]: E0107 09:51:05.184974 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-wcqw9" podUID="a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1" Jan 07 09:51:05 crc kubenswrapper[5131]: E0107 09:51:05.186329 5131 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g97xn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-dvdrn_openshift-machine-config-operator(3942e752-44ba-4678-8723-6cd778e60d73): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 07 09:51:05 crc kubenswrapper[5131]: E0107 09:51:05.187628 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73" Jan 07 09:51:05 crc kubenswrapper[5131]: I0107 09:51:05.202537 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:05 crc kubenswrapper[5131]: I0107 09:51:05.202592 5131 kubelet_node_status.go:736] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:51:05 crc kubenswrapper[5131]: I0107 09:51:05.202609 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:51:05 crc kubenswrapper[5131]: I0107 09:51:05.202630 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 07 09:51:05 crc kubenswrapper[5131]: I0107 09:51:05.202648 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:05Z","lastTransitionTime":"2026-01-07T09:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 07 09:51:05 crc kubenswrapper[5131]: I0107 09:51:05.304907 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:51:05 crc kubenswrapper[5131]: I0107 09:51:05.304979 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:51:05 crc kubenswrapper[5131]: I0107 09:51:05.305000 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:51:05 crc kubenswrapper[5131]: I0107 09:51:05.305029 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 07 09:51:05 crc kubenswrapper[5131]: I0107 09:51:05.305047 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:05Z","lastTransitionTime":"2026-01-07T09:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 07 09:51:05 crc kubenswrapper[5131]: I0107 09:51:05.382330 5131 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160"
Jan 07 09:51:05 crc kubenswrapper[5131]: I0107 09:51:05.408403 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:51:05 crc kubenswrapper[5131]: I0107 09:51:05.408495 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:51:05 crc kubenswrapper[5131]: I0107 09:51:05.408528 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:51:05 crc kubenswrapper[5131]: I0107 09:51:05.408556 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 07 09:51:05 crc kubenswrapper[5131]: I0107 09:51:05.408579 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:05Z","lastTransitionTime":"2026-01-07T09:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 07 09:51:05 crc kubenswrapper[5131]: I0107 09:51:05.511673 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:51:05 crc kubenswrapper[5131]: I0107 09:51:05.512797 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:51:05 crc kubenswrapper[5131]: I0107 09:51:05.513065 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:51:05 crc kubenswrapper[5131]: I0107 09:51:05.513218 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 07 09:51:05 crc kubenswrapper[5131]: I0107 09:51:05.513360 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:05Z","lastTransitionTime":"2026-01-07T09:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 07 09:51:05 crc kubenswrapper[5131]: I0107 09:51:05.617298 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:51:05 crc kubenswrapper[5131]: I0107 09:51:05.617395 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:51:05 crc kubenswrapper[5131]: I0107 09:51:05.617421 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:51:05 crc kubenswrapper[5131]: I0107 09:51:05.617448 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 07 09:51:05 crc kubenswrapper[5131]: I0107 09:51:05.617466 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:05Z","lastTransitionTime":"2026-01-07T09:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 07 09:51:05 crc kubenswrapper[5131]: I0107 09:51:05.719884 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:51:05 crc kubenswrapper[5131]: I0107 09:51:05.720022 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:51:05 crc kubenswrapper[5131]: I0107 09:51:05.720049 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:51:05 crc kubenswrapper[5131]: I0107 09:51:05.720132 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 07 09:51:05 crc kubenswrapper[5131]: I0107 09:51:05.720161 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:05Z","lastTransitionTime":"2026-01-07T09:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 07 09:51:05 crc kubenswrapper[5131]: I0107 09:51:05.823602 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:51:05 crc kubenswrapper[5131]: I0107 09:51:05.823692 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:51:05 crc kubenswrapper[5131]: I0107 09:51:05.823719 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:51:05 crc kubenswrapper[5131]: I0107 09:51:05.823745 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 07 09:51:05 crc kubenswrapper[5131]: I0107 09:51:05.823763 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:05Z","lastTransitionTime":"2026-01-07T09:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 07 09:51:05 crc kubenswrapper[5131]: I0107 09:51:05.926364 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:51:05 crc kubenswrapper[5131]: I0107 09:51:05.926459 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:51:05 crc kubenswrapper[5131]: I0107 09:51:05.926486 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:51:05 crc kubenswrapper[5131]: I0107 09:51:05.926515 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 07 09:51:05 crc kubenswrapper[5131]: I0107 09:51:05.926533 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:05Z","lastTransitionTime":"2026-01-07T09:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 07 09:51:06 crc kubenswrapper[5131]: I0107 09:51:06.029005 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:51:06 crc kubenswrapper[5131]: I0107 09:51:06.029085 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:51:06 crc kubenswrapper[5131]: I0107 09:51:06.029105 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:51:06 crc kubenswrapper[5131]: I0107 09:51:06.029145 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 07 09:51:06 crc kubenswrapper[5131]: I0107 09:51:06.029179 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:06Z","lastTransitionTime":"2026-01-07T09:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 07 09:51:06 crc kubenswrapper[5131]: I0107 09:51:06.048387 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 07 09:51:06 crc kubenswrapper[5131]: I0107 09:51:06.048445 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 07 09:51:06 crc kubenswrapper[5131]: I0107 09:51:06.048472 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 07 09:51:06 crc kubenswrapper[5131]: I0107 09:51:06.048529 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 07 09:51:06 crc kubenswrapper[5131]: E0107 09:51:06.048679 5131 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 07 09:51:06 crc kubenswrapper[5131]: E0107 09:51:06.048678 5131 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 07 09:51:06 crc kubenswrapper[5131]: E0107 09:51:06.048730 5131 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 07 09:51:06 crc kubenswrapper[5131]: E0107 09:51:06.048803 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-07 09:51:22.048781656 +0000 UTC m=+110.215083240 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 07 09:51:06 crc kubenswrapper[5131]: E0107 09:51:06.048813 5131 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 07 09:51:06 crc kubenswrapper[5131]: E0107 09:51:06.048729 5131 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 07 09:51:06 crc kubenswrapper[5131]: E0107 09:51:06.049031 5131 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 07 09:51:06 crc kubenswrapper[5131]: E0107 09:51:06.048697 5131 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 07 09:51:06 crc kubenswrapper[5131]: E0107 09:51:06.049104 5131 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 07 09:51:06 crc kubenswrapper[5131]: E0107 09:51:06.048959 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-07 09:51:22.048931509 +0000 UTC m=+110.215233103 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 07 09:51:06 crc kubenswrapper[5131]: E0107 09:51:06.049177 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-07 09:51:22.049162575 +0000 UTC m=+110.215464179 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 07 09:51:06 crc kubenswrapper[5131]: E0107 09:51:06.049252 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-07 09:51:22.049239637 +0000 UTC m=+110.215541231 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 07 09:51:06 crc kubenswrapper[5131]: I0107 09:51:06.132567 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:51:06 crc kubenswrapper[5131]: I0107 09:51:06.132667 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:51:06 crc kubenswrapper[5131]: I0107 09:51:06.132688 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:51:06 crc kubenswrapper[5131]: I0107 09:51:06.132718 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 07 09:51:06 crc kubenswrapper[5131]: I0107 09:51:06.132736 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:06Z","lastTransitionTime":"2026-01-07T09:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 07 09:51:06 crc kubenswrapper[5131]: I0107 09:51:06.149411 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 07 09:51:06 crc kubenswrapper[5131]: E0107 09:51:06.149628 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:22.149593332 +0000 UTC m=+110.315894906 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 07 09:51:06 crc kubenswrapper[5131]: I0107 09:51:06.149756 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e-metrics-certs\") pod \"network-metrics-daemon-5cj94\" (UID: \"ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e\") " pod="openshift-multus/network-metrics-daemon-5cj94"
Jan 07 09:51:06 crc kubenswrapper[5131]: E0107 09:51:06.149954 5131 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 07 09:51:06 crc kubenswrapper[5131]: E0107 09:51:06.150099 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e-metrics-certs podName:ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e nodeName:}" failed. No retries permitted until 2026-01-07 09:51:22.150066354 +0000 UTC m=+110.316367958 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e-metrics-certs") pod "network-metrics-daemon-5cj94" (UID: "ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 07 09:51:06 crc kubenswrapper[5131]: I0107 09:51:06.180097 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 07 09:51:06 crc kubenswrapper[5131]: E0107 09:51:06.180254 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 07 09:51:06 crc kubenswrapper[5131]: I0107 09:51:06.180371 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cj94"
Jan 07 09:51:06 crc kubenswrapper[5131]: E0107 09:51:06.180526 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cj94" podUID="ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e"
Jan 07 09:51:06 crc kubenswrapper[5131]: I0107 09:51:06.180556 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 07 09:51:06 crc kubenswrapper[5131]: E0107 09:51:06.180621 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 07 09:51:06 crc kubenswrapper[5131]: I0107 09:51:06.180642 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 07 09:51:06 crc kubenswrapper[5131]: E0107 09:51:06.180689 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 07 09:51:06 crc kubenswrapper[5131]: I0107 09:51:06.235515 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:51:06 crc kubenswrapper[5131]: I0107 09:51:06.235605 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:51:06 crc kubenswrapper[5131]: I0107 09:51:06.235640 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:51:06 crc kubenswrapper[5131]: I0107 09:51:06.235676 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 07 09:51:06 crc kubenswrapper[5131]: I0107 09:51:06.235697 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:06Z","lastTransitionTime":"2026-01-07T09:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 07 09:51:06 crc kubenswrapper[5131]: I0107 09:51:06.338225 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:51:06 crc kubenswrapper[5131]: I0107 09:51:06.338300 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:51:06 crc kubenswrapper[5131]: I0107 09:51:06.338321 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:51:06 crc kubenswrapper[5131]: I0107 09:51:06.338347 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 07 09:51:06 crc kubenswrapper[5131]: I0107 09:51:06.338368 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:06Z","lastTransitionTime":"2026-01-07T09:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 07 09:51:06 crc kubenswrapper[5131]: I0107 09:51:06.441276 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:51:06 crc kubenswrapper[5131]: I0107 09:51:06.441366 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:51:06 crc kubenswrapper[5131]: I0107 09:51:06.441388 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:51:06 crc kubenswrapper[5131]: I0107 09:51:06.441416 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 07 09:51:06 crc kubenswrapper[5131]: I0107 09:51:06.441433 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:06Z","lastTransitionTime":"2026-01-07T09:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 07 09:51:06 crc kubenswrapper[5131]: I0107 09:51:06.544309 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:51:06 crc kubenswrapper[5131]: I0107 09:51:06.544378 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:51:06 crc kubenswrapper[5131]: I0107 09:51:06.544403 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:51:06 crc kubenswrapper[5131]: I0107 09:51:06.544430 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 07 09:51:06 crc kubenswrapper[5131]: I0107 09:51:06.544453 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:06Z","lastTransitionTime":"2026-01-07T09:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 07 09:51:06 crc kubenswrapper[5131]: I0107 09:51:06.646283 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:51:06 crc kubenswrapper[5131]: I0107 09:51:06.646332 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:51:06 crc kubenswrapper[5131]: I0107 09:51:06.646344 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:51:06 crc kubenswrapper[5131]: I0107 09:51:06.646362 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 07 09:51:06 crc kubenswrapper[5131]: I0107 09:51:06.646375 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:06Z","lastTransitionTime":"2026-01-07T09:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 07 09:51:06 crc kubenswrapper[5131]: I0107 09:51:06.749032 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:51:06 crc kubenswrapper[5131]: I0107 09:51:06.749112 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:51:06 crc kubenswrapper[5131]: I0107 09:51:06.749140 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:51:06 crc kubenswrapper[5131]: I0107 09:51:06.749170 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 07 09:51:06 crc kubenswrapper[5131]: I0107 09:51:06.749192 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:06Z","lastTransitionTime":"2026-01-07T09:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 07 09:51:06 crc kubenswrapper[5131]: I0107 09:51:06.851577 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:51:06 crc kubenswrapper[5131]: I0107 09:51:06.851678 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:51:06 crc kubenswrapper[5131]: I0107 09:51:06.851700 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:51:06 crc kubenswrapper[5131]: I0107 09:51:06.851725 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 07 09:51:06 crc kubenswrapper[5131]: I0107 09:51:06.851743 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:06Z","lastTransitionTime":"2026-01-07T09:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 07 09:51:06 crc kubenswrapper[5131]: I0107 09:51:06.954420 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:51:06 crc kubenswrapper[5131]: I0107 09:51:06.954534 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:51:06 crc kubenswrapper[5131]: I0107 09:51:06.954554 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:51:06 crc kubenswrapper[5131]: I0107 09:51:06.954577 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 07 09:51:06 crc kubenswrapper[5131]: I0107 09:51:06.954594 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:06Z","lastTransitionTime":"2026-01-07T09:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 07 09:51:07 crc kubenswrapper[5131]: I0107 09:51:07.056790 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:51:07 crc kubenswrapper[5131]: I0107 09:51:07.056875 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:51:07 crc kubenswrapper[5131]: I0107 09:51:07.056894 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:51:07 crc kubenswrapper[5131]: I0107 09:51:07.056916 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 07 09:51:07 crc kubenswrapper[5131]: I0107 09:51:07.056933 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:07Z","lastTransitionTime":"2026-01-07T09:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 07 09:51:07 crc kubenswrapper[5131]: I0107 09:51:07.159856 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:51:07 crc kubenswrapper[5131]: I0107 09:51:07.159972 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:51:07 crc kubenswrapper[5131]: I0107 09:51:07.160029 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:51:07 crc kubenswrapper[5131]: I0107 09:51:07.160064 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 07 09:51:07 crc kubenswrapper[5131]: I0107 09:51:07.160117 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:07Z","lastTransitionTime":"2026-01-07T09:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 07 09:51:07 crc kubenswrapper[5131]: I0107 09:51:07.263439 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:51:07 crc kubenswrapper[5131]: I0107 09:51:07.263545 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:51:07 crc kubenswrapper[5131]: I0107 09:51:07.263581 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:51:07 crc kubenswrapper[5131]: I0107 09:51:07.263618 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 07 09:51:07 crc kubenswrapper[5131]: I0107 09:51:07.263639 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:07Z","lastTransitionTime":"2026-01-07T09:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 07 09:51:07 crc kubenswrapper[5131]: I0107 09:51:07.367130 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 07 09:51:07 crc kubenswrapper[5131]: I0107 09:51:07.367203 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 07 09:51:07 crc kubenswrapper[5131]: I0107 09:51:07.367217 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 07 09:51:07 crc kubenswrapper[5131]: I0107 09:51:07.367242 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 07 09:51:07 crc kubenswrapper[5131]: I0107 09:51:07.367261 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:07Z","lastTransitionTime":"2026-01-07T09:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:07 crc kubenswrapper[5131]: I0107 09:51:07.470384 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:07 crc kubenswrapper[5131]: I0107 09:51:07.470466 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:07 crc kubenswrapper[5131]: I0107 09:51:07.470479 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:07 crc kubenswrapper[5131]: I0107 09:51:07.470500 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:07 crc kubenswrapper[5131]: I0107 09:51:07.470516 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:07Z","lastTransitionTime":"2026-01-07T09:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:07 crc kubenswrapper[5131]: I0107 09:51:07.574360 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:07 crc kubenswrapper[5131]: I0107 09:51:07.574402 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:07 crc kubenswrapper[5131]: I0107 09:51:07.574413 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:07 crc kubenswrapper[5131]: I0107 09:51:07.574430 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:07 crc kubenswrapper[5131]: I0107 09:51:07.574441 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:07Z","lastTransitionTime":"2026-01-07T09:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:07 crc kubenswrapper[5131]: I0107 09:51:07.677409 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:07 crc kubenswrapper[5131]: I0107 09:51:07.677498 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:07 crc kubenswrapper[5131]: I0107 09:51:07.677519 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:07 crc kubenswrapper[5131]: I0107 09:51:07.677544 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:07 crc kubenswrapper[5131]: I0107 09:51:07.677561 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:07Z","lastTransitionTime":"2026-01-07T09:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:07 crc kubenswrapper[5131]: I0107 09:51:07.780388 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:07 crc kubenswrapper[5131]: I0107 09:51:07.780528 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:07 crc kubenswrapper[5131]: I0107 09:51:07.780563 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:07 crc kubenswrapper[5131]: I0107 09:51:07.780598 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:07 crc kubenswrapper[5131]: I0107 09:51:07.780639 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:07Z","lastTransitionTime":"2026-01-07T09:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:07 crc kubenswrapper[5131]: I0107 09:51:07.883295 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:07 crc kubenswrapper[5131]: I0107 09:51:07.883363 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:07 crc kubenswrapper[5131]: I0107 09:51:07.883381 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:07 crc kubenswrapper[5131]: I0107 09:51:07.883405 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:07 crc kubenswrapper[5131]: I0107 09:51:07.883424 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:07Z","lastTransitionTime":"2026-01-07T09:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:07 crc kubenswrapper[5131]: I0107 09:51:07.986552 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:07 crc kubenswrapper[5131]: I0107 09:51:07.986641 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:07 crc kubenswrapper[5131]: I0107 09:51:07.986662 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:07 crc kubenswrapper[5131]: I0107 09:51:07.986689 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:07 crc kubenswrapper[5131]: I0107 09:51:07.986709 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:07Z","lastTransitionTime":"2026-01-07T09:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:08 crc kubenswrapper[5131]: I0107 09:51:08.089804 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:08 crc kubenswrapper[5131]: I0107 09:51:08.089895 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:08 crc kubenswrapper[5131]: I0107 09:51:08.089914 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:08 crc kubenswrapper[5131]: I0107 09:51:08.089936 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:08 crc kubenswrapper[5131]: I0107 09:51:08.089954 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:08Z","lastTransitionTime":"2026-01-07T09:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 09:51:08 crc kubenswrapper[5131]: I0107 09:51:08.179583 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 07 09:51:08 crc kubenswrapper[5131]: I0107 09:51:08.179623 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 07 09:51:08 crc kubenswrapper[5131]: E0107 09:51:08.179773 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 07 09:51:08 crc kubenswrapper[5131]: I0107 09:51:08.179820 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cj94" Jan 07 09:51:08 crc kubenswrapper[5131]: I0107 09:51:08.179889 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 07 09:51:08 crc kubenswrapper[5131]: E0107 09:51:08.180041 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 07 09:51:08 crc kubenswrapper[5131]: E0107 09:51:08.180174 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cj94" podUID="ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e" Jan 07 09:51:08 crc kubenswrapper[5131]: E0107 09:51:08.181599 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 07 09:51:08 crc kubenswrapper[5131]: I0107 09:51:08.193215 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:08 crc kubenswrapper[5131]: I0107 09:51:08.193262 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:08 crc kubenswrapper[5131]: I0107 09:51:08.193280 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:08 crc kubenswrapper[5131]: I0107 09:51:08.193302 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:08 crc kubenswrapper[5131]: I0107 09:51:08.193320 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:08Z","lastTransitionTime":"2026-01-07T09:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:08 crc kubenswrapper[5131]: I0107 09:51:08.296008 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:08 crc kubenswrapper[5131]: I0107 09:51:08.296078 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:08 crc kubenswrapper[5131]: I0107 09:51:08.296102 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:08 crc kubenswrapper[5131]: I0107 09:51:08.296131 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:08 crc kubenswrapper[5131]: I0107 09:51:08.296156 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:08Z","lastTransitionTime":"2026-01-07T09:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:08 crc kubenswrapper[5131]: I0107 09:51:08.398904 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:08 crc kubenswrapper[5131]: I0107 09:51:08.398970 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:08 crc kubenswrapper[5131]: I0107 09:51:08.398993 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:08 crc kubenswrapper[5131]: I0107 09:51:08.399018 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:08 crc kubenswrapper[5131]: I0107 09:51:08.399036 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:08Z","lastTransitionTime":"2026-01-07T09:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:08 crc kubenswrapper[5131]: I0107 09:51:08.502127 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:08 crc kubenswrapper[5131]: I0107 09:51:08.502208 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:08 crc kubenswrapper[5131]: I0107 09:51:08.502229 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:08 crc kubenswrapper[5131]: I0107 09:51:08.502257 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:08 crc kubenswrapper[5131]: I0107 09:51:08.502275 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:08Z","lastTransitionTime":"2026-01-07T09:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:08 crc kubenswrapper[5131]: I0107 09:51:08.604544 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:08 crc kubenswrapper[5131]: I0107 09:51:08.604614 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:08 crc kubenswrapper[5131]: I0107 09:51:08.604635 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:08 crc kubenswrapper[5131]: I0107 09:51:08.604665 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:08 crc kubenswrapper[5131]: I0107 09:51:08.604689 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:08Z","lastTransitionTime":"2026-01-07T09:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:08 crc kubenswrapper[5131]: I0107 09:51:08.706925 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:08 crc kubenswrapper[5131]: I0107 09:51:08.707014 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:08 crc kubenswrapper[5131]: I0107 09:51:08.707042 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:08 crc kubenswrapper[5131]: I0107 09:51:08.707075 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:08 crc kubenswrapper[5131]: I0107 09:51:08.707101 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:08Z","lastTransitionTime":"2026-01-07T09:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:08 crc kubenswrapper[5131]: I0107 09:51:08.809907 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:08 crc kubenswrapper[5131]: I0107 09:51:08.810028 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:08 crc kubenswrapper[5131]: I0107 09:51:08.810064 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:08 crc kubenswrapper[5131]: I0107 09:51:08.810097 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:08 crc kubenswrapper[5131]: I0107 09:51:08.810125 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:08Z","lastTransitionTime":"2026-01-07T09:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:08 crc kubenswrapper[5131]: I0107 09:51:08.912645 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:08 crc kubenswrapper[5131]: I0107 09:51:08.912699 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:08 crc kubenswrapper[5131]: I0107 09:51:08.912713 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:08 crc kubenswrapper[5131]: I0107 09:51:08.912738 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:08 crc kubenswrapper[5131]: I0107 09:51:08.912754 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:08Z","lastTransitionTime":"2026-01-07T09:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:09 crc kubenswrapper[5131]: I0107 09:51:09.014999 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:09 crc kubenswrapper[5131]: I0107 09:51:09.015057 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:09 crc kubenswrapper[5131]: I0107 09:51:09.015074 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:09 crc kubenswrapper[5131]: I0107 09:51:09.015096 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:09 crc kubenswrapper[5131]: I0107 09:51:09.015111 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:09Z","lastTransitionTime":"2026-01-07T09:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:09 crc kubenswrapper[5131]: I0107 09:51:09.117522 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:09 crc kubenswrapper[5131]: I0107 09:51:09.117567 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:09 crc kubenswrapper[5131]: I0107 09:51:09.117577 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:09 crc kubenswrapper[5131]: I0107 09:51:09.117592 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:09 crc kubenswrapper[5131]: I0107 09:51:09.117603 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:09Z","lastTransitionTime":"2026-01-07T09:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:09 crc kubenswrapper[5131]: I0107 09:51:09.220791 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:09 crc kubenswrapper[5131]: I0107 09:51:09.220893 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:09 crc kubenswrapper[5131]: I0107 09:51:09.220919 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:09 crc kubenswrapper[5131]: I0107 09:51:09.220947 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:09 crc kubenswrapper[5131]: I0107 09:51:09.220968 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:09Z","lastTransitionTime":"2026-01-07T09:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:09 crc kubenswrapper[5131]: I0107 09:51:09.328819 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:09 crc kubenswrapper[5131]: I0107 09:51:09.328904 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:09 crc kubenswrapper[5131]: I0107 09:51:09.328922 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:09 crc kubenswrapper[5131]: I0107 09:51:09.328947 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:09 crc kubenswrapper[5131]: I0107 09:51:09.328965 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:09Z","lastTransitionTime":"2026-01-07T09:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:09 crc kubenswrapper[5131]: I0107 09:51:09.431074 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:09 crc kubenswrapper[5131]: I0107 09:51:09.431363 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:09 crc kubenswrapper[5131]: I0107 09:51:09.431474 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:09 crc kubenswrapper[5131]: I0107 09:51:09.431590 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:09 crc kubenswrapper[5131]: I0107 09:51:09.431730 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:09Z","lastTransitionTime":"2026-01-07T09:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:09 crc kubenswrapper[5131]: I0107 09:51:09.534390 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:09 crc kubenswrapper[5131]: I0107 09:51:09.534675 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:09 crc kubenswrapper[5131]: I0107 09:51:09.534778 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:09 crc kubenswrapper[5131]: I0107 09:51:09.534897 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:09 crc kubenswrapper[5131]: I0107 09:51:09.534994 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:09Z","lastTransitionTime":"2026-01-07T09:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:09 crc kubenswrapper[5131]: I0107 09:51:09.637171 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:09 crc kubenswrapper[5131]: I0107 09:51:09.637241 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:09 crc kubenswrapper[5131]: I0107 09:51:09.637259 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:09 crc kubenswrapper[5131]: I0107 09:51:09.637285 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:09 crc kubenswrapper[5131]: I0107 09:51:09.637303 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:09Z","lastTransitionTime":"2026-01-07T09:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:09 crc kubenswrapper[5131]: I0107 09:51:09.740217 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:09 crc kubenswrapper[5131]: I0107 09:51:09.740530 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:09 crc kubenswrapper[5131]: I0107 09:51:09.740670 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:09 crc kubenswrapper[5131]: I0107 09:51:09.740797 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:09 crc kubenswrapper[5131]: I0107 09:51:09.740973 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:09Z","lastTransitionTime":"2026-01-07T09:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:09 crc kubenswrapper[5131]: I0107 09:51:09.843389 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:09 crc kubenswrapper[5131]: I0107 09:51:09.843462 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:09 crc kubenswrapper[5131]: I0107 09:51:09.843484 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:09 crc kubenswrapper[5131]: I0107 09:51:09.843508 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:09 crc kubenswrapper[5131]: I0107 09:51:09.843528 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:09Z","lastTransitionTime":"2026-01-07T09:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:09 crc kubenswrapper[5131]: I0107 09:51:09.950587 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:09 crc kubenswrapper[5131]: I0107 09:51:09.950691 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:09 crc kubenswrapper[5131]: I0107 09:51:09.950724 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:09 crc kubenswrapper[5131]: I0107 09:51:09.950758 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:09 crc kubenswrapper[5131]: I0107 09:51:09.950792 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:09Z","lastTransitionTime":"2026-01-07T09:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:10 crc kubenswrapper[5131]: I0107 09:51:10.054395 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:10 crc kubenswrapper[5131]: I0107 09:51:10.054513 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:10 crc kubenswrapper[5131]: I0107 09:51:10.054542 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:10 crc kubenswrapper[5131]: I0107 09:51:10.054576 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:10 crc kubenswrapper[5131]: I0107 09:51:10.054601 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:10Z","lastTransitionTime":"2026-01-07T09:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:10 crc kubenswrapper[5131]: I0107 09:51:10.157342 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:10 crc kubenswrapper[5131]: I0107 09:51:10.157393 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:10 crc kubenswrapper[5131]: I0107 09:51:10.157403 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:10 crc kubenswrapper[5131]: I0107 09:51:10.157422 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:10 crc kubenswrapper[5131]: I0107 09:51:10.157434 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:10Z","lastTransitionTime":"2026-01-07T09:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 09:51:10 crc kubenswrapper[5131]: I0107 09:51:10.179421 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 07 09:51:10 crc kubenswrapper[5131]: I0107 09:51:10.179503 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cj94" Jan 07 09:51:10 crc kubenswrapper[5131]: E0107 09:51:10.179629 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 07 09:51:10 crc kubenswrapper[5131]: I0107 09:51:10.179769 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 07 09:51:10 crc kubenswrapper[5131]: E0107 09:51:10.180011 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 07 09:51:10 crc kubenswrapper[5131]: I0107 09:51:10.180113 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 07 09:51:10 crc kubenswrapper[5131]: E0107 09:51:10.180213 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 07 09:51:10 crc kubenswrapper[5131]: E0107 09:51:10.180328 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5cj94" podUID="ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e" Jan 07 09:51:10 crc kubenswrapper[5131]: I0107 09:51:10.260113 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:10 crc kubenswrapper[5131]: I0107 09:51:10.260183 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:10 crc kubenswrapper[5131]: I0107 09:51:10.260203 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:10 crc kubenswrapper[5131]: I0107 09:51:10.260227 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:10 crc kubenswrapper[5131]: I0107 09:51:10.260249 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:10Z","lastTransitionTime":"2026-01-07T09:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:10 crc kubenswrapper[5131]: I0107 09:51:10.363010 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:10 crc kubenswrapper[5131]: I0107 09:51:10.363247 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:10 crc kubenswrapper[5131]: I0107 09:51:10.363369 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:10 crc kubenswrapper[5131]: I0107 09:51:10.363460 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:10 crc kubenswrapper[5131]: I0107 09:51:10.363550 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:10Z","lastTransitionTime":"2026-01-07T09:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:10 crc kubenswrapper[5131]: I0107 09:51:10.465799 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:10 crc kubenswrapper[5131]: I0107 09:51:10.465872 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:10 crc kubenswrapper[5131]: I0107 09:51:10.465885 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:10 crc kubenswrapper[5131]: I0107 09:51:10.465904 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:10 crc kubenswrapper[5131]: I0107 09:51:10.465917 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:10Z","lastTransitionTime":"2026-01-07T09:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:10 crc kubenswrapper[5131]: I0107 09:51:10.567927 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:10 crc kubenswrapper[5131]: I0107 09:51:10.568325 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:10 crc kubenswrapper[5131]: I0107 09:51:10.568472 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:10 crc kubenswrapper[5131]: I0107 09:51:10.568610 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:10 crc kubenswrapper[5131]: I0107 09:51:10.568739 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:10Z","lastTransitionTime":"2026-01-07T09:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:10 crc kubenswrapper[5131]: I0107 09:51:10.671086 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:10 crc kubenswrapper[5131]: I0107 09:51:10.671932 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:10 crc kubenswrapper[5131]: I0107 09:51:10.671972 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:10 crc kubenswrapper[5131]: I0107 09:51:10.672021 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:10 crc kubenswrapper[5131]: I0107 09:51:10.672047 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:10Z","lastTransitionTime":"2026-01-07T09:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:10 crc kubenswrapper[5131]: I0107 09:51:10.774385 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:10 crc kubenswrapper[5131]: I0107 09:51:10.774459 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:10 crc kubenswrapper[5131]: I0107 09:51:10.774477 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:10 crc kubenswrapper[5131]: I0107 09:51:10.774502 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:10 crc kubenswrapper[5131]: I0107 09:51:10.774520 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:10Z","lastTransitionTime":"2026-01-07T09:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:10 crc kubenswrapper[5131]: I0107 09:51:10.876952 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:10 crc kubenswrapper[5131]: I0107 09:51:10.877056 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:10 crc kubenswrapper[5131]: I0107 09:51:10.877074 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:10 crc kubenswrapper[5131]: I0107 09:51:10.877099 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:10 crc kubenswrapper[5131]: I0107 09:51:10.877118 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:10Z","lastTransitionTime":"2026-01-07T09:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:10 crc kubenswrapper[5131]: I0107 09:51:10.979531 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:10 crc kubenswrapper[5131]: I0107 09:51:10.979596 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:10 crc kubenswrapper[5131]: I0107 09:51:10.979613 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:10 crc kubenswrapper[5131]: I0107 09:51:10.979636 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:10 crc kubenswrapper[5131]: I0107 09:51:10.979690 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:10Z","lastTransitionTime":"2026-01-07T09:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:11 crc kubenswrapper[5131]: I0107 09:51:11.082928 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:11 crc kubenswrapper[5131]: I0107 09:51:11.082990 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:11 crc kubenswrapper[5131]: I0107 09:51:11.083003 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:11 crc kubenswrapper[5131]: I0107 09:51:11.083027 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:11 crc kubenswrapper[5131]: I0107 09:51:11.083040 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:11Z","lastTransitionTime":"2026-01-07T09:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:11 crc kubenswrapper[5131]: I0107 09:51:11.185475 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:11 crc kubenswrapper[5131]: I0107 09:51:11.185912 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:11 crc kubenswrapper[5131]: I0107 09:51:11.185998 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:11 crc kubenswrapper[5131]: I0107 09:51:11.186076 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:11 crc kubenswrapper[5131]: I0107 09:51:11.186141 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:11Z","lastTransitionTime":"2026-01-07T09:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:11 crc kubenswrapper[5131]: I0107 09:51:11.288317 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:11 crc kubenswrapper[5131]: I0107 09:51:11.288383 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:11 crc kubenswrapper[5131]: I0107 09:51:11.288394 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:11 crc kubenswrapper[5131]: I0107 09:51:11.288419 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:11 crc kubenswrapper[5131]: I0107 09:51:11.288804 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:11Z","lastTransitionTime":"2026-01-07T09:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:11 crc kubenswrapper[5131]: I0107 09:51:11.391389 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:11 crc kubenswrapper[5131]: I0107 09:51:11.391449 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:11 crc kubenswrapper[5131]: I0107 09:51:11.391462 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:11 crc kubenswrapper[5131]: I0107 09:51:11.391480 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:11 crc kubenswrapper[5131]: I0107 09:51:11.391497 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:11Z","lastTransitionTime":"2026-01-07T09:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:11 crc kubenswrapper[5131]: I0107 09:51:11.494252 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:11 crc kubenswrapper[5131]: I0107 09:51:11.494319 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:11 crc kubenswrapper[5131]: I0107 09:51:11.494339 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:11 crc kubenswrapper[5131]: I0107 09:51:11.494364 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:11 crc kubenswrapper[5131]: I0107 09:51:11.494382 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:11Z","lastTransitionTime":"2026-01-07T09:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:11 crc kubenswrapper[5131]: I0107 09:51:11.596881 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:11 crc kubenswrapper[5131]: I0107 09:51:11.596972 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:11 crc kubenswrapper[5131]: I0107 09:51:11.597007 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:11 crc kubenswrapper[5131]: I0107 09:51:11.597036 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:11 crc kubenswrapper[5131]: I0107 09:51:11.597056 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:11Z","lastTransitionTime":"2026-01-07T09:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:11 crc kubenswrapper[5131]: I0107 09:51:11.699550 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:11 crc kubenswrapper[5131]: I0107 09:51:11.699637 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:11 crc kubenswrapper[5131]: I0107 09:51:11.699664 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:11 crc kubenswrapper[5131]: I0107 09:51:11.699692 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:11 crc kubenswrapper[5131]: I0107 09:51:11.699714 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:11Z","lastTransitionTime":"2026-01-07T09:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:11 crc kubenswrapper[5131]: I0107 09:51:11.802217 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:11 crc kubenswrapper[5131]: I0107 09:51:11.802307 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:11 crc kubenswrapper[5131]: I0107 09:51:11.802323 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:11 crc kubenswrapper[5131]: I0107 09:51:11.802342 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:11 crc kubenswrapper[5131]: I0107 09:51:11.802356 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:11Z","lastTransitionTime":"2026-01-07T09:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:11 crc kubenswrapper[5131]: I0107 09:51:11.904601 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:11 crc kubenswrapper[5131]: I0107 09:51:11.904651 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:11 crc kubenswrapper[5131]: I0107 09:51:11.904665 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:11 crc kubenswrapper[5131]: I0107 09:51:11.904681 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:11 crc kubenswrapper[5131]: I0107 09:51:11.904694 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:11Z","lastTransitionTime":"2026-01-07T09:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.006897 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.006984 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.007009 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.007041 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.007065 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:12Z","lastTransitionTime":"2026-01-07T09:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.110608 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.110681 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.110701 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.110726 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.110744 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:12Z","lastTransitionTime":"2026-01-07T09:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.179377 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.179516 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 07 09:51:12 crc kubenswrapper[5131]: E0107 09:51:12.180985 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.179583 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 07 09:51:12 crc kubenswrapper[5131]: E0107 09:51:12.181108 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.179542 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cj94" Jan 07 09:51:12 crc kubenswrapper[5131]: E0107 09:51:12.182384 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 07 09:51:12 crc kubenswrapper[5131]: E0107 09:51:12.182590 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5cj94" podUID="ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.192902 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5cj94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdv7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdv7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5cj94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.207083 5131 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3942e752-44ba-4678-8723-6cd778e60d73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g97xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g97xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dvdrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.212793 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.212829 5131 kubelet_node_status.go:736] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.212875 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.212900 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.212916 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:12Z","lastTransitionTime":"2026-01-07T09:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.220634 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-n4kr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad935b69-bef7-46a2-a03a-367404c13329\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9czf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9czf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-n4kr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.229458 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-mb6rx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e402924-308a-4d47-8bf8-24a147d5f8bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zr9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mb6rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.251505 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84a14d49-a62a-496d-9134-f47c75840988\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://bcc1f440c98d635bb4817103fd1d9a17926b7a874f95ff484233a874c8eadeb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://8a7484877b854cc26fd09edc6fd5c32934c1dffbbe432bfe7aff19ab695ef69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://75bb0f73ec339c2c6734121cc7d17e1fc680fd5202133c971e39ab46778e5714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bbdfb2d1ed90a5108986f54b916f1abbd45a3bae0271525826521f154c84eb84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://6d3a5a8dfdfcd032674d1e587d9cbd4f65ba801617ba61300364dac7a766bcc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://86a293f775ea339fa870889624391ae039158ac4544d88b6f9c9d7c136e716a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86a293f775ea339fa870889624391ae039158ac4544d88b6f9c9d7c136e716a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://102d63810639c2cee7fa3e0fef9769b09374348e27bc61573718700039515aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://102d63810639c2cee7fa3e0fef9769b09374348e27bc61573718700039515aa7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://160fd415fae252c3639e426e9905fd01e6e8f42b4cbb66f8169427c602cc373f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://160fd415fae252c3639e426e9905fd01e6e8f42b4cbb66f8169427c602cc373f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T09:49:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T09:49:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:49:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.272496 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae48c9e0-ebbd-4c8e-9c54-f6b3ac967d34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://3383385e15ea13116da82fca0263faac293829a1d334c3ab9c3e887d3df064f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://81a19faef229379a9f11c9404f00a4cd033fe495e075b60878147f896005767f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a048a29a003bbae3bffb916e657c9b18246309ec82bcd1cf410f76e266ba25cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8b7f67a6eae4396f64fdd42279b61c6411a1dd1ad3f4d92b483b4cf59ff1284c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:49:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.290417 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2562cbe-7a5f-44ee-ab23-4c3c8713b3c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c8a53522c69210792aee2dce5b7e8e34b2cf22c24393e063a59b465373ab5096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca0
8e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1e073b8c65df9f45d38018d244c88e515556561a3c3feb4a2cf3d270c77064b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\
"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ffbff8be21e181dfc3205fb877325fee8beefff7ba32e422a2619e1ab9772a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1043d76beefe7dc0844f533476401d9ca57619ede4a2fa4b59df7c24ef674024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1043d76beefe7dc0844f533476401d9ca57619ede4
a2fa4b59df7c24ef674024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:49:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.310109 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.315145 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.315221 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.315246 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.315282 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.315307 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:12Z","lastTransitionTime":"2026-01-07T09:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.324972 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.350605 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbjvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b188180-f777-4a12-845b-d19fd5853d85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy 
cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbjvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.366829 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.384303 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.400607 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.412477 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mrsjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b094e1e2-9ae5-4cf3-9cef-71c25224af2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgbqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mrsjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.418058 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.418124 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.418166 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.418200 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.418225 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:12Z","lastTransitionTime":"2026-01-07T09:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.426692 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-wcqw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf4gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wcqw9\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.452817 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"592342ad-cf5f-4290-aa15-e99a6454cbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kpj7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.465143 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf037731-32c8-4638-9ee7-13bdb0c68279\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://45412eb529aa08671402f5e439a2d0258d5e438466b13a1a3a8264e3eb9c8407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory
\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b059fcd2d184beda447aba3f6a320cb6d3f0c1bc3061fc47b9020d4c03f4a020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b059fcd2d184beda447aba3f6a320cb6d3f0c1bc3061fc47b9020d4c03f4a020\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:49:32Z\\\"}}\
" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.479630 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a8b62c-1e16-4bf4-8a1a-7e21eea28a36\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:51:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:51:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://9ab1390be253b0acce2b38b656b6ab5fb3b2b0b0df6b0bf4aa1c9a6706d5b377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memor
y\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://222177f33dbcd646941928b01ab9b05233038233497ca1767fba6f7706b3dc93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f94695038c6c0633c279f363909c7c60ac6e6487469757ddfa9a64766e9ad38a\\\",\\\"image\\\":\\\"quay.io/crcont/openshi
ft-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://81989c7ac801be354f6f1e78382dbefc67b72ef6a85367ea48e04fc6ff4f128e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce5c5322e4dfa939241d2f3f807c9d150117431e391c5f986a200413b054a33\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-07T09:50:19Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW0107 09:50:18.874623 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0107 09:50:18.874740 1 builder.go:304] check-endpoints version 
v0.0.0-unknown-c3d9642-c3d9642\\\\nI0107 09:50:18.875448 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1194057718/tls.crt::/tmp/serving-cert-1194057718/tls.key\\\\\\\"\\\\nI0107 09:50:19.352672 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0107 09:50:19.355791 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0107 09:50:19.355824 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0107 09:50:19.355916 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0107 09:50:19.355934 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0107 09:50:19.362427 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0107 09:50:19.362473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0107 09:50:19.362471 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0107 09:50:19.362482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 09:50:19.362512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0107 09:50:19.362519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0107 09:50:19.362527 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0107 09:50:19.362533 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0107 09:50:19.364774 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T09:50:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:50:50Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f7200e5d1d13d232ad67de2ea89381542d858c266de6b68e33bfe97a520bfd66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://dd6371190af55f4fae494e73d316e7347f26bd60e0b02bc18c31ce1cf7f1bb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd6371190af55f4fae494e73d316e7347f26bd60e0b02bc18c31ce1cf7f1bb9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:49:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.492176 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.521613 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.521683 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.521702 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.521727 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.521747 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:12Z","lastTransitionTime":"2026-01-07T09:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.624079 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.624417 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.624554 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.624724 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.624957 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:12Z","lastTransitionTime":"2026-01-07T09:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.727577 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.728517 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.728699 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.728860 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.729015 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:12Z","lastTransitionTime":"2026-01-07T09:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.831208 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.831451 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.831626 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.831767 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.831987 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:12Z","lastTransitionTime":"2026-01-07T09:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.934716 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.934795 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.934823 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.934889 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:12 crc kubenswrapper[5131]: I0107 09:51:12.934915 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:12Z","lastTransitionTime":"2026-01-07T09:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:13 crc kubenswrapper[5131]: I0107 09:51:13.037209 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:13 crc kubenswrapper[5131]: I0107 09:51:13.037557 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:13 crc kubenswrapper[5131]: I0107 09:51:13.037729 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:13 crc kubenswrapper[5131]: I0107 09:51:13.037971 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:13 crc kubenswrapper[5131]: I0107 09:51:13.038192 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:13Z","lastTransitionTime":"2026-01-07T09:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:13 crc kubenswrapper[5131]: I0107 09:51:13.140448 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:13 crc kubenswrapper[5131]: I0107 09:51:13.140499 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:13 crc kubenswrapper[5131]: I0107 09:51:13.140522 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:13 crc kubenswrapper[5131]: I0107 09:51:13.140547 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:13 crc kubenswrapper[5131]: I0107 09:51:13.140565 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:13Z","lastTransitionTime":"2026-01-07T09:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:13 crc kubenswrapper[5131]: I0107 09:51:13.243303 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:13 crc kubenswrapper[5131]: I0107 09:51:13.243580 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:13 crc kubenswrapper[5131]: I0107 09:51:13.243781 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:13 crc kubenswrapper[5131]: I0107 09:51:13.244074 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:13 crc kubenswrapper[5131]: I0107 09:51:13.244282 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:13Z","lastTransitionTime":"2026-01-07T09:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:13 crc kubenswrapper[5131]: I0107 09:51:13.347006 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:13 crc kubenswrapper[5131]: I0107 09:51:13.347296 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:13 crc kubenswrapper[5131]: I0107 09:51:13.347482 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:13 crc kubenswrapper[5131]: I0107 09:51:13.347698 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:13 crc kubenswrapper[5131]: I0107 09:51:13.347995 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:13Z","lastTransitionTime":"2026-01-07T09:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:13 crc kubenswrapper[5131]: I0107 09:51:13.450212 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:13 crc kubenswrapper[5131]: I0107 09:51:13.450280 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:13 crc kubenswrapper[5131]: I0107 09:51:13.450299 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:13 crc kubenswrapper[5131]: I0107 09:51:13.450325 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:13 crc kubenswrapper[5131]: I0107 09:51:13.450343 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:13Z","lastTransitionTime":"2026-01-07T09:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:13 crc kubenswrapper[5131]: I0107 09:51:13.552559 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:13 crc kubenswrapper[5131]: I0107 09:51:13.552904 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:13 crc kubenswrapper[5131]: I0107 09:51:13.553288 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:13 crc kubenswrapper[5131]: I0107 09:51:13.553710 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:13 crc kubenswrapper[5131]: I0107 09:51:13.553932 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:13Z","lastTransitionTime":"2026-01-07T09:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:13 crc kubenswrapper[5131]: I0107 09:51:13.656640 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:13 crc kubenswrapper[5131]: I0107 09:51:13.656943 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:13 crc kubenswrapper[5131]: I0107 09:51:13.657116 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:13 crc kubenswrapper[5131]: I0107 09:51:13.657292 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:13 crc kubenswrapper[5131]: I0107 09:51:13.657467 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:13Z","lastTransitionTime":"2026-01-07T09:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:13 crc kubenswrapper[5131]: I0107 09:51:13.760541 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:13 crc kubenswrapper[5131]: I0107 09:51:13.761454 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:13 crc kubenswrapper[5131]: I0107 09:51:13.761870 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:13 crc kubenswrapper[5131]: I0107 09:51:13.762329 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:13 crc kubenswrapper[5131]: I0107 09:51:13.762573 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:13Z","lastTransitionTime":"2026-01-07T09:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:13 crc kubenswrapper[5131]: I0107 09:51:13.865458 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:13 crc kubenswrapper[5131]: I0107 09:51:13.865794 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:13 crc kubenswrapper[5131]: I0107 09:51:13.866043 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:13 crc kubenswrapper[5131]: I0107 09:51:13.866256 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:13 crc kubenswrapper[5131]: I0107 09:51:13.866410 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:13Z","lastTransitionTime":"2026-01-07T09:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:13 crc kubenswrapper[5131]: I0107 09:51:13.970955 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:13 crc kubenswrapper[5131]: I0107 09:51:13.971260 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:13 crc kubenswrapper[5131]: I0107 09:51:13.971376 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:13 crc kubenswrapper[5131]: I0107 09:51:13.971482 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:13 crc kubenswrapper[5131]: I0107 09:51:13.971612 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:13Z","lastTransitionTime":"2026-01-07T09:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.074113 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.074428 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.074608 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.074751 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.074953 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:14Z","lastTransitionTime":"2026-01-07T09:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.178266 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.178347 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.178371 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.178401 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.178425 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:14Z","lastTransitionTime":"2026-01-07T09:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.179396 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 07 09:51:14 crc kubenswrapper[5131]: E0107 09:51:14.179677 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.179494 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.179768 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cj94" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.179774 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 07 09:51:14 crc kubenswrapper[5131]: E0107 09:51:14.179928 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 07 09:51:14 crc kubenswrapper[5131]: E0107 09:51:14.180234 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cj94" podUID="ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e" Jan 07 09:51:14 crc kubenswrapper[5131]: E0107 09:51:14.180644 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.280821 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.281105 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.281118 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.281134 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.281149 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:14Z","lastTransitionTime":"2026-01-07T09:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.385198 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.385281 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.385301 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.385329 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.385349 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:14Z","lastTransitionTime":"2026-01-07T09:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.487662 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.487766 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.487827 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.487888 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.487908 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:14Z","lastTransitionTime":"2026-01-07T09:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.591336 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.591412 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.591432 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.591458 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.591476 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:14Z","lastTransitionTime":"2026-01-07T09:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.693924 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.693983 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.693996 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.694017 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.694030 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:14Z","lastTransitionTime":"2026-01-07T09:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.704559 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.704640 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.704670 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.704702 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.704726 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:14Z","lastTransitionTime":"2026-01-07T09:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:14 crc kubenswrapper[5131]: E0107 09:51:14.720064 5131 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400452Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861252Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:51:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:51:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:51:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:51:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd75f290-f432-4d83-b44b-78dd53c6e94f\\\",\\\"systemUUID\\\":\\\"8ea6fa36-73d5-4d37-aab0-72c44945d452\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.728561 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.728631 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.728650 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.728675 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.728696 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:14Z","lastTransitionTime":"2026-01-07T09:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:14 crc kubenswrapper[5131]: E0107 09:51:14.740729 5131 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400452Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861252Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:51:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:51:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:51:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:51:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd75f290-f432-4d83-b44b-78dd53c6e94f\\\",\\\"systemUUID\\\":\\\"8ea6fa36-73d5-4d37-aab0-72c44945d452\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.745156 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.745261 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.745283 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.745347 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.745369 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:14Z","lastTransitionTime":"2026-01-07T09:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:14 crc kubenswrapper[5131]: E0107 09:51:14.759576 5131 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400452Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861252Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:51:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:51:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:51:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:51:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd75f290-f432-4d83-b44b-78dd53c6e94f\\\",\\\"systemUUID\\\":\\\"8ea6fa36-73d5-4d37-aab0-72c44945d452\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.763331 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.763388 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.763405 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.763429 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.763446 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:14Z","lastTransitionTime":"2026-01-07T09:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:14 crc kubenswrapper[5131]: E0107 09:51:14.777680 5131 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400452Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861252Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:51:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:51:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:51:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:51:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd75f290-f432-4d83-b44b-78dd53c6e94f\\\",\\\"systemUUID\\\":\\\"8ea6fa36-73d5-4d37-aab0-72c44945d452\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.781353 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.781529 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.781651 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.781803 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.781990 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:14Z","lastTransitionTime":"2026-01-07T09:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:14 crc kubenswrapper[5131]: E0107 09:51:14.794983 5131 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400452Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861252Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:51:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:51:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:51:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-07T09:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-07T09:51:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd75f290-f432-4d83-b44b-78dd53c6e94f\\\",\\\"systemUUID\\\":\\\"8ea6fa36-73d5-4d37-aab0-72c44945d452\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:14 crc kubenswrapper[5131]: E0107 09:51:14.795554 5131 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.797124 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.797304 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.797469 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.797606 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.797744 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:14Z","lastTransitionTime":"2026-01-07T09:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.900415 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.900466 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.900483 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.900503 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:14 crc kubenswrapper[5131]: I0107 09:51:14.900521 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:14Z","lastTransitionTime":"2026-01-07T09:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:15 crc kubenswrapper[5131]: I0107 09:51:15.002713 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:15 crc kubenswrapper[5131]: I0107 09:51:15.002768 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:15 crc kubenswrapper[5131]: I0107 09:51:15.002793 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:15 crc kubenswrapper[5131]: I0107 09:51:15.002819 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:15 crc kubenswrapper[5131]: I0107 09:51:15.002879 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:15Z","lastTransitionTime":"2026-01-07T09:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:15 crc kubenswrapper[5131]: I0107 09:51:15.105124 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:15 crc kubenswrapper[5131]: I0107 09:51:15.105207 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:15 crc kubenswrapper[5131]: I0107 09:51:15.105244 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:15 crc kubenswrapper[5131]: I0107 09:51:15.105275 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:15 crc kubenswrapper[5131]: I0107 09:51:15.105297 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:15Z","lastTransitionTime":"2026-01-07T09:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:15 crc kubenswrapper[5131]: I0107 09:51:15.207317 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:15 crc kubenswrapper[5131]: I0107 09:51:15.207407 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:15 crc kubenswrapper[5131]: I0107 09:51:15.207496 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:15 crc kubenswrapper[5131]: I0107 09:51:15.207536 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:15 crc kubenswrapper[5131]: I0107 09:51:15.207562 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:15Z","lastTransitionTime":"2026-01-07T09:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:15 crc kubenswrapper[5131]: I0107 09:51:15.310486 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:15 crc kubenswrapper[5131]: I0107 09:51:15.310549 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:15 crc kubenswrapper[5131]: I0107 09:51:15.310567 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:15 crc kubenswrapper[5131]: I0107 09:51:15.310596 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:15 crc kubenswrapper[5131]: I0107 09:51:15.310614 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:15Z","lastTransitionTime":"2026-01-07T09:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:15 crc kubenswrapper[5131]: I0107 09:51:15.413516 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:15 crc kubenswrapper[5131]: I0107 09:51:15.413583 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:15 crc kubenswrapper[5131]: I0107 09:51:15.413602 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:15 crc kubenswrapper[5131]: I0107 09:51:15.413627 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:15 crc kubenswrapper[5131]: I0107 09:51:15.413644 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:15Z","lastTransitionTime":"2026-01-07T09:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:15 crc kubenswrapper[5131]: I0107 09:51:15.516826 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:15 crc kubenswrapper[5131]: I0107 09:51:15.517263 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:15 crc kubenswrapper[5131]: I0107 09:51:15.517398 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:15 crc kubenswrapper[5131]: I0107 09:51:15.517585 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:15 crc kubenswrapper[5131]: I0107 09:51:15.517741 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:15Z","lastTransitionTime":"2026-01-07T09:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:15 crc kubenswrapper[5131]: I0107 09:51:15.621285 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:15 crc kubenswrapper[5131]: I0107 09:51:15.621696 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:15 crc kubenswrapper[5131]: I0107 09:51:15.621863 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:15 crc kubenswrapper[5131]: I0107 09:51:15.622007 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:15 crc kubenswrapper[5131]: I0107 09:51:15.622143 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:15Z","lastTransitionTime":"2026-01-07T09:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:15 crc kubenswrapper[5131]: I0107 09:51:15.724103 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:15 crc kubenswrapper[5131]: I0107 09:51:15.724156 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:15 crc kubenswrapper[5131]: I0107 09:51:15.724168 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:15 crc kubenswrapper[5131]: I0107 09:51:15.724186 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:15 crc kubenswrapper[5131]: I0107 09:51:15.724201 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:15Z","lastTransitionTime":"2026-01-07T09:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:15 crc kubenswrapper[5131]: I0107 09:51:15.826274 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:15 crc kubenswrapper[5131]: I0107 09:51:15.826332 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:15 crc kubenswrapper[5131]: I0107 09:51:15.826350 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:15 crc kubenswrapper[5131]: I0107 09:51:15.826377 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:15 crc kubenswrapper[5131]: I0107 09:51:15.826395 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:15Z","lastTransitionTime":"2026-01-07T09:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:15 crc kubenswrapper[5131]: I0107 09:51:15.928560 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:15 crc kubenswrapper[5131]: I0107 09:51:15.928617 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:15 crc kubenswrapper[5131]: I0107 09:51:15.928635 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:15 crc kubenswrapper[5131]: I0107 09:51:15.928660 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:15 crc kubenswrapper[5131]: I0107 09:51:15.928680 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:15Z","lastTransitionTime":"2026-01-07T09:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.030967 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.031021 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.031032 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.031049 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.031062 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:16Z","lastTransitionTime":"2026-01-07T09:51:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.133867 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.133939 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.133958 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.133988 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.134007 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:16Z","lastTransitionTime":"2026-01-07T09:51:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.179804 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.180019 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cj94" Jan 07 09:51:16 crc kubenswrapper[5131]: E0107 09:51:16.180021 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 07 09:51:16 crc kubenswrapper[5131]: E0107 09:51:16.180158 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cj94" podUID="ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e" Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.180551 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 07 09:51:16 crc kubenswrapper[5131]: E0107 09:51:16.180741 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.180776 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 07 09:51:16 crc kubenswrapper[5131]: E0107 09:51:16.181896 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.235984 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.236066 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.236095 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.236120 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.236139 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:16Z","lastTransitionTime":"2026-01-07T09:51:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.339282 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.339337 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.339355 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.339377 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.339395 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:16Z","lastTransitionTime":"2026-01-07T09:51:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.441587 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.441646 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.441662 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.441688 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.441704 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:16Z","lastTransitionTime":"2026-01-07T09:51:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.544715 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.544769 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.544785 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.544806 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.544822 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:16Z","lastTransitionTime":"2026-01-07T09:51:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.629529 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-mb6rx" event={"ID":"1e402924-308a-4d47-8bf8-24a147d5f8bf","Type":"ContainerStarted","Data":"7de2783b949cae55fd91f3318b8de983905e30eb006a314a7e6d42ee6f3e7df9"} Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.641582 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf037731-32c8-4638-9ee7-13bdb0c68279\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://45412eb529aa08671402f5e439a2d0258d5e438466b13a1a3a8264e3eb9c8407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,
\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b059fcd2d184beda447aba3f6a320cb6d3f0c1bc3061fc47b9020d4c03f4a020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b059fcd2d184beda447aba3f6a320cb6d3f0c1bc3061fc47b9020d4c03f4a020\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:49:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.646914 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.646975 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.646995 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.647020 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.647039 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:16Z","lastTransitionTime":"2026-01-07T09:51:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.662966 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a8b62c-1e16-4bf4-8a1a-7e21eea28a36\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:51:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:51:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://9ab1390be253b0acce2b38b656b6ab5fb3b2b0b0df6b0bf4aa1c9a6706d5b377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://222177f33dbcd646941928b01ab9b05233038233497ca1767fba6f7706b3dc93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f94695038c6c0633c279f363909c7c60ac6e6487469757ddfa9a64766e9ad38a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e0
3355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://81989c7ac801be354f6f1e78382dbefc67b72ef6a85367ea48e04fc6ff4f128e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce5c5322e4dfa939241d2f3f807c9d150117431e391c5f986a200413b054a33\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-07T09:50:19Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW0107 09:50:18.874623 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0107 09:50:18.874740 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0107 09:50:18.875448 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1194057718/tls.crt::/tmp/serving-cert-1194057718/tls.key\\\\\\\"\\\\nI0107 
09:50:19.352672 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0107 09:50:19.355791 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0107 09:50:19.355824 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0107 09:50:19.355916 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0107 09:50:19.355934 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0107 09:50:19.362427 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0107 09:50:19.362473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0107 09:50:19.362471 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0107 09:50:19.362482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 09:50:19.362512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0107 09:50:19.362519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0107 09:50:19.362527 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0107 09:50:19.362533 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0107 09:50:19.364774 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T09:50:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:50:50Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f7200e5d1d13d232ad67de2ea89381542d858c266de6b68e33bfe97a520bfd66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://dd6371190af55f4fae494e73d316e7347f26bd60e0b02bc18c31ce1cf7f1bb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd6371190af55f4fae494e73d316e7347f26bd60e0b02bc18c31ce1cf7f1bb9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:49:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.678663 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.692127 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5cj94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdv7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdv7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5cj94\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.705410 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3942e752-44ba-4678-8723-6cd778e60d73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet 
been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g97xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g97xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dvdrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.718607 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-n4kr4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad935b69-bef7-46a2-a03a-367404c13329\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9czf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9czf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-n4kr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.730167 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-mb6rx" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e402924-308a-4d47-8bf8-24a147d5f8bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://7de2783b949cae55fd91f3318b8de983905e30eb006a314a7e6d42ee6f3e7df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:51:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"
/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zr9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mb6rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.749387 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.749436 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.749455 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.749479 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.749497 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:16Z","lastTransitionTime":"2026-01-07T09:51:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.760662 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84a14d49-a62a-496d-9134-f47c75840988\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://bcc1f440c98d635bb4817103fd1d9a17926b7a874f95ff484233a874c8eadeb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/k
ubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://8a7484877b854cc26fd09edc6fd5c32934c1dffbbe432bfe7aff19ab695ef69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://75bb0f73ec339c2c6734121cc7d17e1fc680fd5202133c971e39ab46778e5714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a
6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bbdfb2d1ed90a5108986f54b916f1abbd45a3bae0271525826521f154c84eb84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://6d3a5a8dfdfcd032674d1e587d9cbd4f65ba801617ba61300364dac7a766bcc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://86a293f775ea339fa870889624391ae039158ac4544d88b6f9c9d7c136e716a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86a293f775ea339fa870889624391ae039158ac4544d88b6f9c9d7c136e716a7\\\",\\\"exitCode\\\":0
,\\\"finishedAt\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://102d63810639c2cee7fa3e0fef9769b09374348e27bc61573718700039515aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://102d63810639c2cee7fa3e0fef9769b09374348e27bc61573718700039515aa7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://160fd415fae252c3639e426e9905fd01e6e8f42b4cbb66f8169427c602cc373f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\
",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://160fd415fae252c3639e426e9905fd01e6e8f42b4cbb66f8169427c602cc373f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T09:49:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T09:49:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:49:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.781502 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae48c9e0-ebbd-4c8e-9c54-f6b3ac967d34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://3383385e15ea13116da82fca0263faac293829a1d334c3ab9c3e887d3df064f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://81a19faef229379a9f11c9404f00a4cd033fe495e075b60878147f896005767f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a048a29a003bbae3bffb916e657c9b18246309ec82bcd1cf410f76e266ba25cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8b7f67a6eae4396f64fdd42279b61c6411a1dd1ad3f4d92b483b4cf59ff1284c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:49:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.797476 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2562cbe-7a5f-44ee-ab23-4c3c8713b3c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c8a53522c69210792aee2dce5b7e8e34b2cf22c24393e063a59b465373ab5096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca0
8e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1e073b8c65df9f45d38018d244c88e515556561a3c3feb4a2cf3d270c77064b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\
"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ffbff8be21e181dfc3205fb877325fee8beefff7ba32e422a2619e1ab9772a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1043d76beefe7dc0844f533476401d9ca57619ede4a2fa4b59df7c24ef674024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1043d76beefe7dc0844f533476401d9ca57619ede4
a2fa4b59df7c24ef674024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:49:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.814503 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.828564 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.846692 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbjvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b188180-f777-4a12-845b-d19fd5853d85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy 
cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbjvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.851775 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.851895 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.851922 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:16 
crc kubenswrapper[5131]: I0107 09:51:16.851954 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.851980 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:16Z","lastTransitionTime":"2026-01-07T09:51:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.861361 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.874307 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.889003 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.900876 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mrsjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b094e1e2-9ae5-4cf3-9cef-71c25224af2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgbqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mrsjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.919353 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-wcqw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf4gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wcqw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.942392 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"592342ad-cf5f-4290-aa15-e99a6454cbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kpj7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.954763 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.954879 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.954906 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.954935 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:16 crc kubenswrapper[5131]: I0107 09:51:16.954956 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:16Z","lastTransitionTime":"2026-01-07T09:51:16Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.057402 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.057470 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.057489 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.057514 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.057533 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:17Z","lastTransitionTime":"2026-01-07T09:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.160075 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.160129 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.160147 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.160173 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.160192 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:17Z","lastTransitionTime":"2026-01-07T09:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.262517 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.262575 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.262592 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.262617 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.262635 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:17Z","lastTransitionTime":"2026-01-07T09:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.365317 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.365362 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.365379 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.365400 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.365416 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:17Z","lastTransitionTime":"2026-01-07T09:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.467052 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.467102 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.467118 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.467141 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.467158 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:17Z","lastTransitionTime":"2026-01-07T09:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.569713 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.569899 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.569931 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.569961 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.569983 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:17Z","lastTransitionTime":"2026-01-07T09:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.633973 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gbjvz" event={"ID":"5b188180-f777-4a12-845b-d19fd5853d85","Type":"ContainerStarted","Data":"865dddfbe060ddcb71042d177754b25d863c2a48a46859c2b8b5b918dbaca5d2"} Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.636297 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wcqw9" event={"ID":"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1","Type":"ContainerStarted","Data":"6ec74912c138c89ccad68970857fdf60edfad26d60a9ecc7be033be6f8349b05"} Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.653051 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbjvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b188180-f777-4a12-845b-d19fd5853d85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://865dddfbe060ddcb71042d177754b25d863c2a48a46859c2b8b5b918dbaca5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:51:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/h
ost/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\
\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbjvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.666156 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.672223 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.672271 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.672288 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.672311 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.672328 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:17Z","lastTransitionTime":"2026-01-07T09:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.682925 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.697344 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.709513 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mrsjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b094e1e2-9ae5-4cf3-9cef-71c25224af2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgbqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mrsjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.719722 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-wcqw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf4gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wcqw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.733381 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"592342ad-cf5f-4290-aa15-e99a6454cbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kpj7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.742417 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf037731-32c8-4638-9ee7-13bdb0c68279\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://45412eb529aa08671402f5e439a2d0258d5e438466b13a1a3a8264e3eb9c8407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b059fcd2d184beda447aba3f6a320cb6d3f0c1bc3061fc47b9020d4c03f4a020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b059fcd2d184beda447aba3f6a320cb6d3f0c1bc3061fc47b9020d4c03f4a020\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:49:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.758822 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a8b62c-1e16-4bf4-8a1a-7e21eea28a36\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:51:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:51:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://9ab1390be253b0acce2b38b656b6ab5fb3b2b0b0df6b0bf4aa1c9a6706d5b377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver
\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://222177f33dbcd646941928b01ab9b05233038233497ca1767fba6f7706b3dc93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f94695038c6c0633c279f363909c7c60ac6e6487469757ddfa9a64766e9ad38a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://81989c7ac801be354f6f1e78382dbefc67b72ef6a85367ea48e04fc6ff4f128e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce5c5322e4dfa939241d2f3f807c9d150117431e391c5f986a200413b054a33\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-07T09:50:19Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW0107 09:50:18.874623 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0107 09:50:18.874740 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0107 09:50:18.875448 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1194057718/tls.crt::/tmp/serving-cert-1194057718/tls.key\\\\\\\"\\\\nI0107 09:50:19.352672 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0107 09:50:19.355791 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0107 
09:50:19.355824 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0107 09:50:19.355916 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0107 09:50:19.355934 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0107 09:50:19.362427 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0107 09:50:19.362473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0107 09:50:19.362471 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0107 09:50:19.362482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 09:50:19.362512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0107 09:50:19.362519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0107 09:50:19.362527 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0107 09:50:19.362533 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0107 09:50:19.364774 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T09:50:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:50:50Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f7200e5d1d13d232ad67de2ea89381542d858c266de6b68e33bfe97a520bfd66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://dd6371190af55f4fae494e73d316e7347f26bd60e0b02bc18c31ce1cf7f1bb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd6371190af55f4fae494e73d316e7347f26bd60e0b02bc18c31ce1cf7f1bb9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:49:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.768119 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.774698 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.774729 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.774740 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.774757 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.774768 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:17Z","lastTransitionTime":"2026-01-07T09:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.775807 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5cj94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdv7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdv7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5cj94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.787934 5131 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3942e752-44ba-4678-8723-6cd778e60d73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g97xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g97xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dvdrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.798467 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-n4kr4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad935b69-bef7-46a2-a03a-367404c13329\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9czf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9czf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-n4kr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.808213 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-mb6rx" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e402924-308a-4d47-8bf8-24a147d5f8bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://7de2783b949cae55fd91f3318b8de983905e30eb006a314a7e6d42ee6f3e7df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:51:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"
/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zr9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mb6rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.838327 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84a14d49-a62a-496d-9134-f47c75840988\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://bcc1f440c98d635bb4817103fd1d9a17926b7a874f95ff484233a874c8eadeb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://8a7484877b854cc26fd09edc6fd5c32934c1dffbbe432bfe7aff19ab695ef69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"re
source-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://75bb0f73ec339c2c6734121cc7d17e1fc680fd5202133c971e39ab46778e5714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bbdfb2d1ed90a5108986f54b916f1abbd45a3bae0271525826521f154c84eb84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"starte
d\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://6d3a5a8dfdfcd032674d1e587d9cbd4f65ba801617ba61300364dac7a766bcc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://86a293f775ea339fa870889624391ae039158ac4544d88b6f9c
9d7c136e716a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86a293f775ea339fa870889624391ae039158ac4544d88b6f9c9d7c136e716a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://102d63810639c2cee7fa3e0fef9769b09374348e27bc61573718700039515aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://102d63810639c2cee7fa3e0fef9769b09374348e27bc61573718700039515aa7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://160fd415fae252c3639e426e9905fd01e6e8f42b4cbb66f8169427c602cc373f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://160fd415fae252c3639e426e9905fd01e6e8f42b4cbb66f8169427c602cc373f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T09:49:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T09:49:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:49:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:17 crc 
kubenswrapper[5131]: I0107 09:51:17.865740 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae48c9e0-ebbd-4c8e-9c54-f6b3ac967d34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://3383385e15ea13116da82fca0263faac293829a1d334c3ab9c3e887d3df064f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://81a19faef229379a9f11c9404f00a4cd033fe495e075b60878147f896005767f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a048a29a003bbae3bffb916e657c9b18246309ec82bcd1cf410f76e266ba25cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8b7f67a6eae4396f64fdd42279b61c6411a1dd1ad3f4d92b483b4cf59ff1284c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPat
h\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:49:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.876795 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.876862 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.877526 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.877582 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.877601 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:17Z","lastTransitionTime":"2026-01-07T09:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.887066 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2562cbe-7a5f-44ee-ab23-4c3c8713b3c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c8a53522c69210792aee2dce5b7e8e34b2cf22c24393e063a59b465373ab5096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1e073b8c65df9f45d38018d244c88e515556561a3c3feb4a2cf3d270c77064b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ffbff8be21e181dfc3205fb877325fee8beefff7ba32e422a2619e1ab9772a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\
\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1043d76beefe7dc0844f533476401d9ca57619ede4a2fa4b59df7c24ef674024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1043d76beefe7dc0844f533476401d9ca57619ede4a2fa4b59df7c24ef674024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:49:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.896248 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.903930 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.912477 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.920659 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.929434 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.934756 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mrsjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b094e1e2-9ae5-4cf3-9cef-71c25224af2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgbqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mrsjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.943019 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-wcqw9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:51:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:51:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://6ec74912c138c89ccad68970857fdf60edfad26d60a9ecc7be033be6f8349b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:51:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"moun
tPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pf4gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wcqw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:17 crc 
kubenswrapper[5131]: I0107 09:51:17.958238 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"592342ad-cf5f-4290-aa15-e99a6454cbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78wtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kpj7m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.966377 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf037731-32c8-4638-9ee7-13bdb0c68279\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://45412eb529aa08671402f5e439a2d0258d5e438466b13a1a3a8264e3eb9c8407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory
\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b059fcd2d184beda447aba3f6a320cb6d3f0c1bc3061fc47b9020d4c03f4a020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b059fcd2d184beda447aba3f6a320cb6d3f0c1bc3061fc47b9020d4c03f4a020\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:49:32Z\\\"}}\
" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.977318 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a8b62c-1e16-4bf4-8a1a-7e21eea28a36\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:51:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:51:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://9ab1390be253b0acce2b38b656b6ab5fb3b2b0b0df6b0bf4aa1c9a6706d5b377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memor
y\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://222177f33dbcd646941928b01ab9b05233038233497ca1767fba6f7706b3dc93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f94695038c6c0633c279f363909c7c60ac6e6487469757ddfa9a64766e9ad38a\\\",\\\"image\\\":\\\"quay.io/crcont/openshi
ft-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://81989c7ac801be354f6f1e78382dbefc67b72ef6a85367ea48e04fc6ff4f128e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ce5c5322e4dfa939241d2f3f807c9d150117431e391c5f986a200413b054a33\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-07T09:50:19Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW0107 09:50:18.874623 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0107 09:50:18.874740 1 builder.go:304] check-endpoints version 
v0.0.0-unknown-c3d9642-c3d9642\\\\nI0107 09:50:18.875448 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1194057718/tls.crt::/tmp/serving-cert-1194057718/tls.key\\\\\\\"\\\\nI0107 09:50:19.352672 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0107 09:50:19.355791 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0107 09:50:19.355824 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0107 09:50:19.355916 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0107 09:50:19.355934 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0107 09:50:19.362427 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0107 09:50:19.362473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0107 09:50:19.362471 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0107 09:50:19.362482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0107 09:50:19.362512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0107 09:50:19.362519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0107 09:50:19.362527 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0107 09:50:19.362533 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0107 09:50:19.364774 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-07T09:50:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:50:50Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f7200e5d1d13d232ad67de2ea89381542d858c266de6b68e33bfe97a520bfd66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://dd6371190af55f4fae494e73d316e7347f26bd60e0b02bc18c31ce1cf7f1bb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd6371190af55f4fae494e73d316e7347f26bd60e0b02bc18c31ce1cf7f1bb9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:49:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.979441 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.979487 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.979499 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 
09:51:17.979515 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.979527 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:17Z","lastTransitionTime":"2026-01-07T09:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.987125 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:17 crc kubenswrapper[5131]: I0107 09:51:17.994267 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5cj94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdv7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdv7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5cj94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.002335 5131 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3942e752-44ba-4678-8723-6cd778e60d73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g97xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g97xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dvdrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.009532 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-n4kr4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad935b69-bef7-46a2-a03a-367404c13329\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9czf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9czf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-n4kr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.027525 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-mb6rx" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e402924-308a-4d47-8bf8-24a147d5f8bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://7de2783b949cae55fd91f3318b8de983905e30eb006a314a7e6d42ee6f3e7df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:51:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"
/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zr9m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mb6rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.047919 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84a14d49-a62a-496d-9134-f47c75840988\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://bcc1f440c98d635bb4817103fd1d9a17926b7a874f95ff484233a874c8eadeb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://8a7484877b854cc26fd09edc6fd5c32934c1dffbbe432bfe7aff19ab695ef69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"re
source-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://75bb0f73ec339c2c6734121cc7d17e1fc680fd5202133c971e39ab46778e5714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bbdfb2d1ed90a5108986f54b916f1abbd45a3bae0271525826521f154c84eb84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"starte
d\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://6d3a5a8dfdfcd032674d1e587d9cbd4f65ba801617ba61300364dac7a766bcc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://86a293f775ea339fa870889624391ae039158ac4544d88b6f9c
9d7c136e716a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86a293f775ea339fa870889624391ae039158ac4544d88b6f9c9d7c136e716a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://102d63810639c2cee7fa3e0fef9769b09374348e27bc61573718700039515aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://102d63810639c2cee7fa3e0fef9769b09374348e27bc61573718700039515aa7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://160fd415fae252c3639e426e9905fd01e6e8f42b4cbb66f8169427c602cc373f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://160fd415fae252c3639e426e9905fd01e6e8f42b4cbb66f8169427c602cc373f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T09:49:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T09:49:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:49:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:18 crc 
kubenswrapper[5131]: I0107 09:51:18.063236 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae48c9e0-ebbd-4c8e-9c54-f6b3ac967d34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://3383385e15ea13116da82fca0263faac293829a1d334c3ab9c3e887d3df064f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://81a19faef229379a9f11c9404f00a4cd033fe495e075b60878147f896005767f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a048a29a003bbae3bffb916e657c9b18246309ec82bcd1cf410f76e266ba25cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8b7f67a6eae4396f64fdd42279b61c6411a1dd1ad3f4d92b483b4cf59ff1284c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPat
h\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:49:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.080206 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2562cbe-7a5f-44ee-ab23-4c3c8713b3c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c8a53522c69210792aee2dce5b7e8e34b2cf22c24393e063a59b465373ab5096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1e073b8c65df9f45d38018d244c88e515556561a3c3feb4a2cf3d270c77064b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ffbff8be21e181dfc3205fb877325fee8beefff7ba32e422a2619e1ab9772a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:49:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\"
:0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1043d76beefe7dc0844f533476401d9ca57619ede4a2fa4b59df7c24ef674024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1043d76beefe7dc0844f533476401d9ca57619ede4a2fa4b59df7c24ef674024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-07T09:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-07T09:49:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:49:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.080971 5131 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.081125 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.081224 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.081328 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.081437 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:18Z","lastTransitionTime":"2026-01-07T09:51:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.094721 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.109228 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.127170 5131 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gbjvz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b188180-f777-4a12-845b-d19fd5853d85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-07T09:50:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://865dddfbe060ddcb71042d177754b25d863c2a48a46859c2b8b5b918dbaca5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-07T09:51:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount
\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwlcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-07T09:50:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gbjvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.180081 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.180375 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cj94" Jan 07 09:51:18 crc kubenswrapper[5131]: E0107 09:51:18.180388 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.180423 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 07 09:51:18 crc kubenswrapper[5131]: E0107 09:51:18.180591 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cj94" podUID="ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e" Jan 07 09:51:18 crc kubenswrapper[5131]: E0107 09:51:18.180775 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.181134 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 07 09:51:18 crc kubenswrapper[5131]: E0107 09:51:18.181297 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.184087 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.184125 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.184155 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.184179 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.184196 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:18Z","lastTransitionTime":"2026-01-07T09:51:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.289725 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.289778 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.289796 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.289817 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.289855 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:18Z","lastTransitionTime":"2026-01-07T09:51:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.392614 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.392674 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.392691 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.392712 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.392727 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:18Z","lastTransitionTime":"2026-01-07T09:51:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.494816 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.494903 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.494919 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.494945 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.494960 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:18Z","lastTransitionTime":"2026-01-07T09:51:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.597692 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.597757 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.597775 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.597799 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.597818 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:18Z","lastTransitionTime":"2026-01-07T09:51:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.641360 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"7dd19d742d57971df5266fec0c66cb9e468958e90efb3092e7e52ddeb732ef66"} Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.641427 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"0d0e066d88a0cda0852dd66a4203d8d2927b3e5d77c7f9df2bc17d892e884ae4"} Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.643124 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"bf488139a23f331f8b3113e6013425493dbc0739c7b246ec9e9cd4f3b2a7360b"} Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.646762 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-n4kr4" event={"ID":"ad935b69-bef7-46a2-a03a-367404c13329","Type":"ContainerStarted","Data":"0b7b88b93e617551aab3b962425d9f62ab7ad5827a4ff558cd757a590855d31a"} Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.646858 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-n4kr4" event={"ID":"ad935b69-bef7-46a2-a03a-367404c13329","Type":"ContainerStarted","Data":"d00c3a62d34136628bd91ada478ee07d51f7a815da74ebaa5735bbc078e2e179"} Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.648462 5131 generic.go:358] "Generic (PLEG): container finished" podID="5b188180-f777-4a12-845b-d19fd5853d85" containerID="865dddfbe060ddcb71042d177754b25d863c2a48a46859c2b8b5b918dbaca5d2" exitCode=0 Jan 07 09:51:18 crc 
kubenswrapper[5131]: I0107 09:51:18.648519 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gbjvz" event={"ID":"5b188180-f777-4a12-845b-d19fd5853d85","Type":"ContainerDied","Data":"865dddfbe060ddcb71042d177754b25d863c2a48a46859c2b8b5b918dbaca5d2"} Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.691084 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=28.691062259 podStartE2EDuration="28.691062259s" podCreationTimestamp="2026-01-07 09:50:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:51:18.673394824 +0000 UTC m=+106.839696428" watchObservedRunningTime="2026-01-07 09:51:18.691062259 +0000 UTC m=+106.857363833" Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.699326 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.699382 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.699395 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.699414 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.699426 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:18Z","lastTransitionTime":"2026-01-07T09:51:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.707614 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=28.707597795 podStartE2EDuration="28.707597795s" podCreationTimestamp="2026-01-07 09:50:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:51:18.690488294 +0000 UTC m=+106.856790228" watchObservedRunningTime="2026-01-07 09:51:18.707597795 +0000 UTC m=+106.873899359" Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.765739 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-mb6rx" podStartSLOduration=86.765718337 podStartE2EDuration="1m26.765718337s" podCreationTimestamp="2026-01-07 09:49:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:51:18.765571663 +0000 UTC m=+106.931873257" watchObservedRunningTime="2026-01-07 09:51:18.765718337 +0000 UTC m=+106.932019901" Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.794176 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=28.794160862 podStartE2EDuration="28.794160862s" podCreationTimestamp="2026-01-07 09:50:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:51:18.792620714 +0000 UTC m=+106.958922308" watchObservedRunningTime="2026-01-07 09:51:18.794160862 +0000 UTC m=+106.960462426" Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.806080 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:18 crc kubenswrapper[5131]: 
I0107 09:51:18.806130 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.806144 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.806161 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.806174 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:18Z","lastTransitionTime":"2026-01-07T09:51:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.825784 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=28.825764647 podStartE2EDuration="28.825764647s" podCreationTimestamp="2026-01-07 09:50:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:51:18.811399026 +0000 UTC m=+106.977700610" watchObservedRunningTime="2026-01-07 09:51:18.825764647 +0000 UTC m=+106.992066221" Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.826273 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=28.82626742 podStartE2EDuration="28.82626742s" podCreationTimestamp="2026-01-07 09:50:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-07 09:51:18.825964362 +0000 UTC m=+106.992265926" watchObservedRunningTime="2026-01-07 09:51:18.82626742 +0000 UTC m=+106.992568994" Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.907814 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.907887 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.907904 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.907927 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.907945 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:18Z","lastTransitionTime":"2026-01-07T09:51:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.935351 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-wcqw9" podStartSLOduration=86.935334304 podStartE2EDuration="1m26.935334304s" podCreationTimestamp="2026-01-07 09:49:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:51:18.935030256 +0000 UTC m=+107.101331840" watchObservedRunningTime="2026-01-07 09:51:18.935334304 +0000 UTC m=+107.101635868" Jan 07 09:51:18 crc kubenswrapper[5131]: I0107 09:51:18.970616 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-n4kr4" podStartSLOduration=86.970601141 podStartE2EDuration="1m26.970601141s" podCreationTimestamp="2026-01-07 09:49:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:51:18.97016008 +0000 UTC m=+107.136461654" watchObservedRunningTime="2026-01-07 09:51:18.970601141 +0000 UTC m=+107.136902705" Jan 07 09:51:19 crc kubenswrapper[5131]: I0107 09:51:19.010010 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:19 crc kubenswrapper[5131]: I0107 09:51:19.010071 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:19 crc kubenswrapper[5131]: I0107 09:51:19.010080 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:19 crc kubenswrapper[5131]: I0107 09:51:19.010093 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:19 crc kubenswrapper[5131]: I0107 09:51:19.010102 5131 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:19Z","lastTransitionTime":"2026-01-07T09:51:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 09:51:19 crc kubenswrapper[5131]: I0107 09:51:19.112391 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:19 crc kubenswrapper[5131]: I0107 09:51:19.112438 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:19 crc kubenswrapper[5131]: I0107 09:51:19.112451 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:19 crc kubenswrapper[5131]: I0107 09:51:19.112469 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:19 crc kubenswrapper[5131]: I0107 09:51:19.112485 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:19Z","lastTransitionTime":"2026-01-07T09:51:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:19 crc kubenswrapper[5131]: I0107 09:51:19.217383 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:19 crc kubenswrapper[5131]: I0107 09:51:19.217425 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:19 crc kubenswrapper[5131]: I0107 09:51:19.217434 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:19 crc kubenswrapper[5131]: I0107 09:51:19.217446 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:19 crc kubenswrapper[5131]: I0107 09:51:19.217456 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:19Z","lastTransitionTime":"2026-01-07T09:51:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:19 crc kubenswrapper[5131]: I0107 09:51:19.319318 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:19 crc kubenswrapper[5131]: I0107 09:51:19.319366 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:19 crc kubenswrapper[5131]: I0107 09:51:19.319378 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:19 crc kubenswrapper[5131]: I0107 09:51:19.319393 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:19 crc kubenswrapper[5131]: I0107 09:51:19.319406 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:19Z","lastTransitionTime":"2026-01-07T09:51:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:19 crc kubenswrapper[5131]: I0107 09:51:19.422770 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:19 crc kubenswrapper[5131]: I0107 09:51:19.422867 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:19 crc kubenswrapper[5131]: I0107 09:51:19.422893 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:19 crc kubenswrapper[5131]: I0107 09:51:19.422922 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:19 crc kubenswrapper[5131]: I0107 09:51:19.422947 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:19Z","lastTransitionTime":"2026-01-07T09:51:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:19 crc kubenswrapper[5131]: I0107 09:51:19.525375 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:19 crc kubenswrapper[5131]: I0107 09:51:19.525412 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:19 crc kubenswrapper[5131]: I0107 09:51:19.525421 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:19 crc kubenswrapper[5131]: I0107 09:51:19.525435 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:19 crc kubenswrapper[5131]: I0107 09:51:19.525443 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:19Z","lastTransitionTime":"2026-01-07T09:51:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:19 crc kubenswrapper[5131]: I0107 09:51:19.626927 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:19 crc kubenswrapper[5131]: I0107 09:51:19.627134 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:19 crc kubenswrapper[5131]: I0107 09:51:19.627385 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:19 crc kubenswrapper[5131]: I0107 09:51:19.627499 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:19 crc kubenswrapper[5131]: I0107 09:51:19.627574 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:19Z","lastTransitionTime":"2026-01-07T09:51:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:19 crc kubenswrapper[5131]: I0107 09:51:19.654109 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-mrsjt" event={"ID":"b094e1e2-9ae5-4cf3-9cef-71c25224af2a","Type":"ContainerStarted","Data":"57744c0fc973c0ed4681221220ee4a8ffc2663a227c55dc04cb0a95c84d88afd"} Jan 07 09:51:19 crc kubenswrapper[5131]: I0107 09:51:19.656970 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"cc92638784d64b5e4d6cf39f3c6553c8e2cecff663813f4026c73fa3550c8f39"} Jan 07 09:51:19 crc kubenswrapper[5131]: I0107 09:51:19.663421 5131 generic.go:358] "Generic (PLEG): container finished" podID="5b188180-f777-4a12-845b-d19fd5853d85" containerID="4ae1a55eb039c2e03f1ffd93802c2914aef6db44bb94152a5040a32386edca5e" exitCode=0 Jan 07 09:51:19 crc kubenswrapper[5131]: I0107 09:51:19.663466 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gbjvz" event={"ID":"5b188180-f777-4a12-845b-d19fd5853d85","Type":"ContainerDied","Data":"4ae1a55eb039c2e03f1ffd93802c2914aef6db44bb94152a5040a32386edca5e"} Jan 07 09:51:19 crc kubenswrapper[5131]: I0107 09:51:19.671500 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-mrsjt" podStartSLOduration=87.671472653 podStartE2EDuration="1m27.671472653s" podCreationTimestamp="2026-01-07 09:49:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:51:19.671179806 +0000 UTC m=+107.837481410" watchObservedRunningTime="2026-01-07 09:51:19.671472653 +0000 UTC m=+107.837774247" Jan 07 09:51:19 crc kubenswrapper[5131]: I0107 09:51:19.730063 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 07 09:51:19 crc kubenswrapper[5131]: I0107 09:51:19.730160 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:19 crc kubenswrapper[5131]: I0107 09:51:19.730181 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:19 crc kubenswrapper[5131]: I0107 09:51:19.730237 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:19 crc kubenswrapper[5131]: I0107 09:51:19.730257 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:19Z","lastTransitionTime":"2026-01-07T09:51:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:19 crc kubenswrapper[5131]: I0107 09:51:19.833680 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:19 crc kubenswrapper[5131]: I0107 09:51:19.833767 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:19 crc kubenswrapper[5131]: I0107 09:51:19.833795 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:19 crc kubenswrapper[5131]: I0107 09:51:19.833827 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:19 crc kubenswrapper[5131]: I0107 09:51:19.833911 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:19Z","lastTransitionTime":"2026-01-07T09:51:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:19 crc kubenswrapper[5131]: I0107 09:51:19.936609 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:19 crc kubenswrapper[5131]: I0107 09:51:19.936640 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:19 crc kubenswrapper[5131]: I0107 09:51:19.936648 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:19 crc kubenswrapper[5131]: I0107 09:51:19.936661 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:19 crc kubenswrapper[5131]: I0107 09:51:19.936688 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:19Z","lastTransitionTime":"2026-01-07T09:51:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:20 crc kubenswrapper[5131]: I0107 09:51:20.038884 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:20 crc kubenswrapper[5131]: I0107 09:51:20.038927 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:20 crc kubenswrapper[5131]: I0107 09:51:20.038942 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:20 crc kubenswrapper[5131]: I0107 09:51:20.038959 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:20 crc kubenswrapper[5131]: I0107 09:51:20.038971 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:20Z","lastTransitionTime":"2026-01-07T09:51:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:20 crc kubenswrapper[5131]: I0107 09:51:20.140564 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:20 crc kubenswrapper[5131]: I0107 09:51:20.140679 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:20 crc kubenswrapper[5131]: I0107 09:51:20.140699 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:20 crc kubenswrapper[5131]: I0107 09:51:20.140726 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:20 crc kubenswrapper[5131]: I0107 09:51:20.140744 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:20Z","lastTransitionTime":"2026-01-07T09:51:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 09:51:20 crc kubenswrapper[5131]: I0107 09:51:20.179381 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cj94" Jan 07 09:51:20 crc kubenswrapper[5131]: I0107 09:51:20.179398 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 07 09:51:20 crc kubenswrapper[5131]: I0107 09:51:20.179716 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 07 09:51:20 crc kubenswrapper[5131]: E0107 09:51:20.179846 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 07 09:51:20 crc kubenswrapper[5131]: I0107 09:51:20.180117 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 07 09:51:20 crc kubenswrapper[5131]: E0107 09:51:20.180191 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 07 09:51:20 crc kubenswrapper[5131]: E0107 09:51:20.180280 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5cj94" podUID="ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e" Jan 07 09:51:20 crc kubenswrapper[5131]: E0107 09:51:20.180346 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 07 09:51:20 crc kubenswrapper[5131]: I0107 09:51:20.243331 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:20 crc kubenswrapper[5131]: I0107 09:51:20.243363 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:20 crc kubenswrapper[5131]: I0107 09:51:20.243371 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:20 crc kubenswrapper[5131]: I0107 09:51:20.243385 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:20 crc kubenswrapper[5131]: I0107 09:51:20.243394 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:20Z","lastTransitionTime":"2026-01-07T09:51:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:20 crc kubenswrapper[5131]: I0107 09:51:20.345820 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:20 crc kubenswrapper[5131]: I0107 09:51:20.345953 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:20 crc kubenswrapper[5131]: I0107 09:51:20.345965 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:20 crc kubenswrapper[5131]: I0107 09:51:20.345982 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:20 crc kubenswrapper[5131]: I0107 09:51:20.345994 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:20Z","lastTransitionTime":"2026-01-07T09:51:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:20 crc kubenswrapper[5131]: I0107 09:51:20.447687 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:20 crc kubenswrapper[5131]: I0107 09:51:20.447725 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:20 crc kubenswrapper[5131]: I0107 09:51:20.447737 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:20 crc kubenswrapper[5131]: I0107 09:51:20.447754 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:20 crc kubenswrapper[5131]: I0107 09:51:20.447768 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:20Z","lastTransitionTime":"2026-01-07T09:51:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:20 crc kubenswrapper[5131]: I0107 09:51:20.549409 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:20 crc kubenswrapper[5131]: I0107 09:51:20.549458 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:20 crc kubenswrapper[5131]: I0107 09:51:20.549476 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:20 crc kubenswrapper[5131]: I0107 09:51:20.549499 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:20 crc kubenswrapper[5131]: I0107 09:51:20.549517 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:20Z","lastTransitionTime":"2026-01-07T09:51:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:20 crc kubenswrapper[5131]: I0107 09:51:20.652124 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:20 crc kubenswrapper[5131]: I0107 09:51:20.652531 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:20 crc kubenswrapper[5131]: I0107 09:51:20.652553 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:20 crc kubenswrapper[5131]: I0107 09:51:20.652579 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:20 crc kubenswrapper[5131]: I0107 09:51:20.652596 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:20Z","lastTransitionTime":"2026-01-07T09:51:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:20 crc kubenswrapper[5131]: I0107 09:51:20.670250 5131 generic.go:358] "Generic (PLEG): container finished" podID="5b188180-f777-4a12-845b-d19fd5853d85" containerID="8a075c158ac88a76c80525b5291a2d6529ac9aaeb5f3d328264c14086cab77f4" exitCode=0 Jan 07 09:51:20 crc kubenswrapper[5131]: I0107 09:51:20.670315 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gbjvz" event={"ID":"5b188180-f777-4a12-845b-d19fd5853d85","Type":"ContainerDied","Data":"8a075c158ac88a76c80525b5291a2d6529ac9aaeb5f3d328264c14086cab77f4"} Jan 07 09:51:20 crc kubenswrapper[5131]: I0107 09:51:20.673560 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" event={"ID":"3942e752-44ba-4678-8723-6cd778e60d73","Type":"ContainerStarted","Data":"7c4cf0293fa1a71dcb61b14e25966ac0e03d50830e418eb3b0ef52d64dff8e8a"} Jan 07 09:51:20 crc kubenswrapper[5131]: I0107 09:51:20.673608 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" event={"ID":"3942e752-44ba-4678-8723-6cd778e60d73","Type":"ContainerStarted","Data":"903008c51d00a0d816920831c3581e75cc8a3222da74d38c39c99f7e621c1add"} Jan 07 09:51:20 crc kubenswrapper[5131]: I0107 09:51:20.675770 5131 generic.go:358] "Generic (PLEG): container finished" podID="592342ad-cf5f-4290-aa15-e99a6454cbf5" containerID="aea9d4515f20cdfbd546b072ef046aa909e6f6410217676d722880631f009535" exitCode=0 Jan 07 09:51:20 crc kubenswrapper[5131]: I0107 09:51:20.675822 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" event={"ID":"592342ad-cf5f-4290-aa15-e99a6454cbf5","Type":"ContainerDied","Data":"aea9d4515f20cdfbd546b072ef046aa909e6f6410217676d722880631f009535"} Jan 07 09:51:20 crc kubenswrapper[5131]: I0107 09:51:20.726152 5131 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podStartSLOduration=88.726124746 podStartE2EDuration="1m28.726124746s" podCreationTimestamp="2026-01-07 09:49:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:51:20.724924195 +0000 UTC m=+108.891225769" watchObservedRunningTime="2026-01-07 09:51:20.726124746 +0000 UTC m=+108.892426350" Jan 07 09:51:20 crc kubenswrapper[5131]: I0107 09:51:20.760111 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:20 crc kubenswrapper[5131]: I0107 09:51:20.760154 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:20 crc kubenswrapper[5131]: I0107 09:51:20.760165 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:20 crc kubenswrapper[5131]: I0107 09:51:20.760185 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:20 crc kubenswrapper[5131]: I0107 09:51:20.760199 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:20Z","lastTransitionTime":"2026-01-07T09:51:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:20 crc kubenswrapper[5131]: I0107 09:51:20.863380 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:20 crc kubenswrapper[5131]: I0107 09:51:20.863423 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:20 crc kubenswrapper[5131]: I0107 09:51:20.863436 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:20 crc kubenswrapper[5131]: I0107 09:51:20.863452 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:20 crc kubenswrapper[5131]: I0107 09:51:20.863492 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:20Z","lastTransitionTime":"2026-01-07T09:51:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:20 crc kubenswrapper[5131]: I0107 09:51:20.964764 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:20 crc kubenswrapper[5131]: I0107 09:51:20.964808 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:20 crc kubenswrapper[5131]: I0107 09:51:20.964821 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:20 crc kubenswrapper[5131]: I0107 09:51:20.964854 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:20 crc kubenswrapper[5131]: I0107 09:51:20.964864 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:20Z","lastTransitionTime":"2026-01-07T09:51:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:21 crc kubenswrapper[5131]: I0107 09:51:21.066757 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:21 crc kubenswrapper[5131]: I0107 09:51:21.066878 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:21 crc kubenswrapper[5131]: I0107 09:51:21.066907 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:21 crc kubenswrapper[5131]: I0107 09:51:21.066938 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:21 crc kubenswrapper[5131]: I0107 09:51:21.066963 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:21Z","lastTransitionTime":"2026-01-07T09:51:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:21 crc kubenswrapper[5131]: I0107 09:51:21.171674 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:21 crc kubenswrapper[5131]: I0107 09:51:21.171789 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:21 crc kubenswrapper[5131]: I0107 09:51:21.171876 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:21 crc kubenswrapper[5131]: I0107 09:51:21.171916 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:21 crc kubenswrapper[5131]: I0107 09:51:21.171988 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:21Z","lastTransitionTime":"2026-01-07T09:51:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:21 crc kubenswrapper[5131]: I0107 09:51:21.274874 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:21 crc kubenswrapper[5131]: I0107 09:51:21.275208 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:21 crc kubenswrapper[5131]: I0107 09:51:21.275220 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:21 crc kubenswrapper[5131]: I0107 09:51:21.275340 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:21 crc kubenswrapper[5131]: I0107 09:51:21.275355 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:21Z","lastTransitionTime":"2026-01-07T09:51:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:21 crc kubenswrapper[5131]: I0107 09:51:21.377497 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:21 crc kubenswrapper[5131]: I0107 09:51:21.377550 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:21 crc kubenswrapper[5131]: I0107 09:51:21.377567 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:21 crc kubenswrapper[5131]: I0107 09:51:21.377593 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:21 crc kubenswrapper[5131]: I0107 09:51:21.377611 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:21Z","lastTransitionTime":"2026-01-07T09:51:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:21 crc kubenswrapper[5131]: I0107 09:51:21.480941 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:21 crc kubenswrapper[5131]: I0107 09:51:21.481314 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:21 crc kubenswrapper[5131]: I0107 09:51:21.481334 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:21 crc kubenswrapper[5131]: I0107 09:51:21.481362 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:21 crc kubenswrapper[5131]: I0107 09:51:21.481383 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:21Z","lastTransitionTime":"2026-01-07T09:51:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:21 crc kubenswrapper[5131]: I0107 09:51:21.583420 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:21 crc kubenswrapper[5131]: I0107 09:51:21.583486 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:21 crc kubenswrapper[5131]: I0107 09:51:21.583505 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:21 crc kubenswrapper[5131]: I0107 09:51:21.583538 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:21 crc kubenswrapper[5131]: I0107 09:51:21.583557 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:21Z","lastTransitionTime":"2026-01-07T09:51:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:21 crc kubenswrapper[5131]: I0107 09:51:21.680827 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" event={"ID":"592342ad-cf5f-4290-aa15-e99a6454cbf5","Type":"ContainerStarted","Data":"5ca79d5c3178ab5e7bbe17de7a5f72f6a6f44230a4e31508ecc7b285e555a03a"} Jan 07 09:51:21 crc kubenswrapper[5131]: I0107 09:51:21.680924 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" event={"ID":"592342ad-cf5f-4290-aa15-e99a6454cbf5","Type":"ContainerStarted","Data":"f802473365d7e31f324d34beaa6227d2158e02ba753e62b801fb8b41a09ea25c"} Jan 07 09:51:21 crc kubenswrapper[5131]: I0107 09:51:21.680944 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" event={"ID":"592342ad-cf5f-4290-aa15-e99a6454cbf5","Type":"ContainerStarted","Data":"04f53033f9dc881a5ee1e627ed97a9064b5b49efc2ede373e4de37a4df60b584"} Jan 07 09:51:21 crc kubenswrapper[5131]: I0107 09:51:21.683631 5131 generic.go:358] "Generic (PLEG): container finished" podID="5b188180-f777-4a12-845b-d19fd5853d85" containerID="4a38653313837fcf46bec8bb855eabd45a685e341199f2515f15b2754b9874a7" exitCode=0 Jan 07 09:51:21 crc kubenswrapper[5131]: I0107 09:51:21.683698 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gbjvz" event={"ID":"5b188180-f777-4a12-845b-d19fd5853d85","Type":"ContainerDied","Data":"4a38653313837fcf46bec8bb855eabd45a685e341199f2515f15b2754b9874a7"} Jan 07 09:51:21 crc kubenswrapper[5131]: I0107 09:51:21.685764 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:21 crc kubenswrapper[5131]: I0107 09:51:21.685957 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:21 crc kubenswrapper[5131]: I0107 09:51:21.685972 5131 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:21 crc kubenswrapper[5131]: I0107 09:51:21.685985 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:21 crc kubenswrapper[5131]: I0107 09:51:21.685997 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:21Z","lastTransitionTime":"2026-01-07T09:51:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 09:51:21 crc kubenswrapper[5131]: I0107 09:51:21.789639 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:21 crc kubenswrapper[5131]: I0107 09:51:21.789678 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:21 crc kubenswrapper[5131]: I0107 09:51:21.789689 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:21 crc kubenswrapper[5131]: I0107 09:51:21.789702 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:21 crc kubenswrapper[5131]: I0107 09:51:21.789712 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:21Z","lastTransitionTime":"2026-01-07T09:51:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:21 crc kubenswrapper[5131]: I0107 09:51:21.891946 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:21 crc kubenswrapper[5131]: I0107 09:51:21.891990 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:21 crc kubenswrapper[5131]: I0107 09:51:21.892000 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:21 crc kubenswrapper[5131]: I0107 09:51:21.892014 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:21 crc kubenswrapper[5131]: I0107 09:51:21.892024 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:21Z","lastTransitionTime":"2026-01-07T09:51:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:21 crc kubenswrapper[5131]: I0107 09:51:21.995152 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:21 crc kubenswrapper[5131]: I0107 09:51:21.995199 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:21 crc kubenswrapper[5131]: I0107 09:51:21.995212 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:21 crc kubenswrapper[5131]: I0107 09:51:21.995230 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:21 crc kubenswrapper[5131]: I0107 09:51:21.995241 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:21Z","lastTransitionTime":"2026-01-07T09:51:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:22 crc kubenswrapper[5131]: I0107 09:51:22.097156 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:22 crc kubenswrapper[5131]: I0107 09:51:22.097209 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:22 crc kubenswrapper[5131]: I0107 09:51:22.097226 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:22 crc kubenswrapper[5131]: I0107 09:51:22.097246 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:22 crc kubenswrapper[5131]: I0107 09:51:22.097260 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:22Z","lastTransitionTime":"2026-01-07T09:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:22 crc kubenswrapper[5131]: I0107 09:51:22.149691 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 07 09:51:22 crc kubenswrapper[5131]: I0107 09:51:22.149789 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 07 09:51:22 crc kubenswrapper[5131]: I0107 09:51:22.149815 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 07 09:51:22 crc kubenswrapper[5131]: I0107 09:51:22.149851 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 07 09:51:22 crc kubenswrapper[5131]: E0107 09:51:22.149873 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" 
failed. No retries permitted until 2026-01-07 09:51:54.149850523 +0000 UTC m=+142.316152097 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:22 crc kubenswrapper[5131]: E0107 09:51:22.149904 5131 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 07 09:51:22 crc kubenswrapper[5131]: I0107 09:51:22.149917 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 07 09:51:22 crc kubenswrapper[5131]: E0107 09:51:22.149943 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-07 09:51:54.149935455 +0000 UTC m=+142.316237009 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 07 09:51:22 crc kubenswrapper[5131]: E0107 09:51:22.150126 5131 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 07 09:51:22 crc kubenswrapper[5131]: E0107 09:51:22.150167 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-07 09:51:54.150157421 +0000 UTC m=+142.316458995 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 07 09:51:22 crc kubenswrapper[5131]: E0107 09:51:22.150176 5131 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 07 09:51:22 crc kubenswrapper[5131]: E0107 09:51:22.150215 5131 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 07 09:51:22 crc kubenswrapper[5131]: E0107 09:51:22.150292 5131 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 07 09:51:22 crc kubenswrapper[5131]: E0107 09:51:22.150318 5131 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 07 09:51:22 crc kubenswrapper[5131]: E0107 09:51:22.150229 5131 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 07 09:51:22 crc kubenswrapper[5131]: E0107 09:51:22.150372 5131 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 07 09:51:22 crc kubenswrapper[5131]: E0107 09:51:22.150445 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-07 09:51:54.150404187 +0000 UTC m=+142.316705791 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 07 09:51:22 crc kubenswrapper[5131]: E0107 09:51:22.150482 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-07 09:51:54.150467388 +0000 UTC m=+142.316768992 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 07 09:51:22 crc kubenswrapper[5131]: I0107 09:51:22.186326 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cj94" Jan 07 09:51:22 crc kubenswrapper[5131]: E0107 09:51:22.186509 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cj94" podUID="ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e" Jan 07 09:51:22 crc kubenswrapper[5131]: I0107 09:51:22.186619 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 07 09:51:22 crc kubenswrapper[5131]: E0107 09:51:22.186743 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 07 09:51:22 crc kubenswrapper[5131]: I0107 09:51:22.186895 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 07 09:51:22 crc kubenswrapper[5131]: E0107 09:51:22.187042 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 07 09:51:22 crc kubenswrapper[5131]: I0107 09:51:22.187181 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 07 09:51:22 crc kubenswrapper[5131]: E0107 09:51:22.187331 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 07 09:51:22 crc kubenswrapper[5131]: I0107 09:51:22.199695 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:22 crc kubenswrapper[5131]: I0107 09:51:22.199762 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:22 crc kubenswrapper[5131]: I0107 09:51:22.199787 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:22 crc kubenswrapper[5131]: I0107 09:51:22.199816 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:22 crc kubenswrapper[5131]: I0107 09:51:22.199865 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:22Z","lastTransitionTime":"2026-01-07T09:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:22 crc kubenswrapper[5131]: I0107 09:51:22.251031 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e-metrics-certs\") pod \"network-metrics-daemon-5cj94\" (UID: \"ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e\") " pod="openshift-multus/network-metrics-daemon-5cj94" Jan 07 09:51:22 crc kubenswrapper[5131]: E0107 09:51:22.251303 5131 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 07 09:51:22 crc kubenswrapper[5131]: E0107 09:51:22.251479 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e-metrics-certs podName:ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e nodeName:}" failed. No retries permitted until 2026-01-07 09:51:54.251440488 +0000 UTC m=+142.417742122 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e-metrics-certs") pod "network-metrics-daemon-5cj94" (UID: "ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 07 09:51:22 crc kubenswrapper[5131]: I0107 09:51:22.302031 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:22 crc kubenswrapper[5131]: I0107 09:51:22.302124 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:22 crc kubenswrapper[5131]: I0107 09:51:22.302151 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:22 crc kubenswrapper[5131]: I0107 09:51:22.302189 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:22 crc kubenswrapper[5131]: I0107 09:51:22.302208 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:22Z","lastTransitionTime":"2026-01-07T09:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:22 crc kubenswrapper[5131]: I0107 09:51:22.404654 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:22 crc kubenswrapper[5131]: I0107 09:51:22.404709 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:22 crc kubenswrapper[5131]: I0107 09:51:22.404730 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:22 crc kubenswrapper[5131]: I0107 09:51:22.404754 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:22 crc kubenswrapper[5131]: I0107 09:51:22.404772 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:22Z","lastTransitionTime":"2026-01-07T09:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:22 crc kubenswrapper[5131]: I0107 09:51:22.508496 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:22 crc kubenswrapper[5131]: I0107 09:51:22.508563 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:22 crc kubenswrapper[5131]: I0107 09:51:22.508581 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:22 crc kubenswrapper[5131]: I0107 09:51:22.508605 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:22 crc kubenswrapper[5131]: I0107 09:51:22.508622 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:22Z","lastTransitionTime":"2026-01-07T09:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:22 crc kubenswrapper[5131]: I0107 09:51:22.610629 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:22 crc kubenswrapper[5131]: I0107 09:51:22.610692 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:22 crc kubenswrapper[5131]: I0107 09:51:22.610712 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:22 crc kubenswrapper[5131]: I0107 09:51:22.610738 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:22 crc kubenswrapper[5131]: I0107 09:51:22.610758 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:22Z","lastTransitionTime":"2026-01-07T09:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:22 crc kubenswrapper[5131]: I0107 09:51:22.694485 5131 generic.go:358] "Generic (PLEG): container finished" podID="5b188180-f777-4a12-845b-d19fd5853d85" containerID="c5ebf47b0e648c5c8c251e7b8ba618d327712df2e06595ef8f0dcefb0b72580d" exitCode=0 Jan 07 09:51:22 crc kubenswrapper[5131]: I0107 09:51:22.694586 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gbjvz" event={"ID":"5b188180-f777-4a12-845b-d19fd5853d85","Type":"ContainerDied","Data":"c5ebf47b0e648c5c8c251e7b8ba618d327712df2e06595ef8f0dcefb0b72580d"} Jan 07 09:51:22 crc kubenswrapper[5131]: I0107 09:51:22.701359 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" event={"ID":"592342ad-cf5f-4290-aa15-e99a6454cbf5","Type":"ContainerStarted","Data":"92dbe74b1d2e5e7df12950a9d782b5f9bbf56fa3f40e9f9f0b295b3826dcc856"} Jan 07 09:51:22 crc kubenswrapper[5131]: I0107 09:51:22.701424 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" event={"ID":"592342ad-cf5f-4290-aa15-e99a6454cbf5","Type":"ContainerStarted","Data":"45a4508f424cf7d346d29d14019a0e8197a69731f0733d68bd8927a5f487751e"} Jan 07 09:51:22 crc kubenswrapper[5131]: I0107 09:51:22.701442 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" event={"ID":"592342ad-cf5f-4290-aa15-e99a6454cbf5","Type":"ContainerStarted","Data":"88b5153fcc0ca30117ca94f1936142fa25bb640f9d8a31b37195b519fd101837"} Jan 07 09:51:22 crc kubenswrapper[5131]: I0107 09:51:22.713875 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:22 crc kubenswrapper[5131]: I0107 09:51:22.713934 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:22 crc kubenswrapper[5131]: I0107 09:51:22.713953 5131 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:22 crc kubenswrapper[5131]: I0107 09:51:22.713979 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:22 crc kubenswrapper[5131]: I0107 09:51:22.713999 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:22Z","lastTransitionTime":"2026-01-07T09:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 09:51:22 crc kubenswrapper[5131]: I0107 09:51:22.816290 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:22 crc kubenswrapper[5131]: I0107 09:51:22.816366 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:22 crc kubenswrapper[5131]: I0107 09:51:22.816388 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:22 crc kubenswrapper[5131]: I0107 09:51:22.816416 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:22 crc kubenswrapper[5131]: I0107 09:51:22.816437 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:22Z","lastTransitionTime":"2026-01-07T09:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:22 crc kubenswrapper[5131]: I0107 09:51:22.918470 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:22 crc kubenswrapper[5131]: I0107 09:51:22.918527 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:22 crc kubenswrapper[5131]: I0107 09:51:22.918540 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:22 crc kubenswrapper[5131]: I0107 09:51:22.918559 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:22 crc kubenswrapper[5131]: I0107 09:51:22.918571 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:22Z","lastTransitionTime":"2026-01-07T09:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:23 crc kubenswrapper[5131]: I0107 09:51:23.021863 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:23 crc kubenswrapper[5131]: I0107 09:51:23.021942 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:23 crc kubenswrapper[5131]: I0107 09:51:23.021967 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:23 crc kubenswrapper[5131]: I0107 09:51:23.022001 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:23 crc kubenswrapper[5131]: I0107 09:51:23.022024 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:23Z","lastTransitionTime":"2026-01-07T09:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:23 crc kubenswrapper[5131]: I0107 09:51:23.124439 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:23 crc kubenswrapper[5131]: I0107 09:51:23.124745 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:23 crc kubenswrapper[5131]: I0107 09:51:23.124758 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:23 crc kubenswrapper[5131]: I0107 09:51:23.124775 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:23 crc kubenswrapper[5131]: I0107 09:51:23.124787 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:23Z","lastTransitionTime":"2026-01-07T09:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:23 crc kubenswrapper[5131]: I0107 09:51:23.226756 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:23 crc kubenswrapper[5131]: I0107 09:51:23.226820 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:23 crc kubenswrapper[5131]: I0107 09:51:23.226852 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:23 crc kubenswrapper[5131]: I0107 09:51:23.226884 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:23 crc kubenswrapper[5131]: I0107 09:51:23.226905 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:23Z","lastTransitionTime":"2026-01-07T09:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:23 crc kubenswrapper[5131]: I0107 09:51:23.329646 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:23 crc kubenswrapper[5131]: I0107 09:51:23.329703 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:23 crc kubenswrapper[5131]: I0107 09:51:23.329716 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:23 crc kubenswrapper[5131]: I0107 09:51:23.329734 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:23 crc kubenswrapper[5131]: I0107 09:51:23.329748 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:23Z","lastTransitionTime":"2026-01-07T09:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:23 crc kubenswrapper[5131]: I0107 09:51:23.431889 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:23 crc kubenswrapper[5131]: I0107 09:51:23.431939 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:23 crc kubenswrapper[5131]: I0107 09:51:23.431947 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:23 crc kubenswrapper[5131]: I0107 09:51:23.431961 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:23 crc kubenswrapper[5131]: I0107 09:51:23.431971 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:23Z","lastTransitionTime":"2026-01-07T09:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:23 crc kubenswrapper[5131]: I0107 09:51:23.534388 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:23 crc kubenswrapper[5131]: I0107 09:51:23.534429 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:23 crc kubenswrapper[5131]: I0107 09:51:23.534442 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:23 crc kubenswrapper[5131]: I0107 09:51:23.534459 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:23 crc kubenswrapper[5131]: I0107 09:51:23.534472 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:23Z","lastTransitionTime":"2026-01-07T09:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:23 crc kubenswrapper[5131]: I0107 09:51:23.637033 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:23 crc kubenswrapper[5131]: I0107 09:51:23.637076 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:23 crc kubenswrapper[5131]: I0107 09:51:23.637090 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:23 crc kubenswrapper[5131]: I0107 09:51:23.637107 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:23 crc kubenswrapper[5131]: I0107 09:51:23.637119 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:23Z","lastTransitionTime":"2026-01-07T09:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:23 crc kubenswrapper[5131]: I0107 09:51:23.710110 5131 generic.go:358] "Generic (PLEG): container finished" podID="5b188180-f777-4a12-845b-d19fd5853d85" containerID="22c3271a7511b52f0c466328b10f6f6ec305e72a6e71b18817078669ab434175" exitCode=0 Jan 07 09:51:23 crc kubenswrapper[5131]: I0107 09:51:23.710197 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gbjvz" event={"ID":"5b188180-f777-4a12-845b-d19fd5853d85","Type":"ContainerDied","Data":"22c3271a7511b52f0c466328b10f6f6ec305e72a6e71b18817078669ab434175"} Jan 07 09:51:23 crc kubenswrapper[5131]: I0107 09:51:23.739244 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:23 crc kubenswrapper[5131]: I0107 09:51:23.739280 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:23 crc kubenswrapper[5131]: I0107 09:51:23.739292 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:23 crc kubenswrapper[5131]: I0107 09:51:23.739309 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:23 crc kubenswrapper[5131]: I0107 09:51:23.739321 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:23Z","lastTransitionTime":"2026-01-07T09:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:23 crc kubenswrapper[5131]: I0107 09:51:23.841994 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:23 crc kubenswrapper[5131]: I0107 09:51:23.842067 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:23 crc kubenswrapper[5131]: I0107 09:51:23.842092 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:23 crc kubenswrapper[5131]: I0107 09:51:23.842142 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:23 crc kubenswrapper[5131]: I0107 09:51:23.842207 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:23Z","lastTransitionTime":"2026-01-07T09:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:23 crc kubenswrapper[5131]: I0107 09:51:23.945765 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:23 crc kubenswrapper[5131]: I0107 09:51:23.945810 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:23 crc kubenswrapper[5131]: I0107 09:51:23.945819 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:23 crc kubenswrapper[5131]: I0107 09:51:23.945849 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:23 crc kubenswrapper[5131]: I0107 09:51:23.945861 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:23Z","lastTransitionTime":"2026-01-07T09:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:24 crc kubenswrapper[5131]: I0107 09:51:24.048539 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:24 crc kubenswrapper[5131]: I0107 09:51:24.048585 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:24 crc kubenswrapper[5131]: I0107 09:51:24.048602 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:24 crc kubenswrapper[5131]: I0107 09:51:24.048626 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:24 crc kubenswrapper[5131]: I0107 09:51:24.048644 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:24Z","lastTransitionTime":"2026-01-07T09:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:24 crc kubenswrapper[5131]: I0107 09:51:24.151398 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:24 crc kubenswrapper[5131]: I0107 09:51:24.151479 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:24 crc kubenswrapper[5131]: I0107 09:51:24.151505 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:24 crc kubenswrapper[5131]: I0107 09:51:24.151536 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:24 crc kubenswrapper[5131]: I0107 09:51:24.151560 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:24Z","lastTransitionTime":"2026-01-07T09:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 09:51:24 crc kubenswrapper[5131]: I0107 09:51:24.180161 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 07 09:51:24 crc kubenswrapper[5131]: E0107 09:51:24.180380 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 07 09:51:24 crc kubenswrapper[5131]: I0107 09:51:24.180473 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 07 09:51:24 crc kubenswrapper[5131]: I0107 09:51:24.180701 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 07 09:51:24 crc kubenswrapper[5131]: I0107 09:51:24.180767 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cj94" Jan 07 09:51:24 crc kubenswrapper[5131]: E0107 09:51:24.180701 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 07 09:51:24 crc kubenswrapper[5131]: E0107 09:51:24.180943 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 07 09:51:24 crc kubenswrapper[5131]: E0107 09:51:24.181120 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cj94" podUID="ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e" Jan 07 09:51:24 crc kubenswrapper[5131]: I0107 09:51:24.254486 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:24 crc kubenswrapper[5131]: I0107 09:51:24.254534 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:24 crc kubenswrapper[5131]: I0107 09:51:24.254552 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:24 crc kubenswrapper[5131]: I0107 09:51:24.254576 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:24 crc kubenswrapper[5131]: I0107 09:51:24.254594 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:24Z","lastTransitionTime":"2026-01-07T09:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:24 crc kubenswrapper[5131]: I0107 09:51:24.357720 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:24 crc kubenswrapper[5131]: I0107 09:51:24.358514 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:24 crc kubenswrapper[5131]: I0107 09:51:24.358763 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:24 crc kubenswrapper[5131]: I0107 09:51:24.358983 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:24 crc kubenswrapper[5131]: I0107 09:51:24.359172 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:24Z","lastTransitionTime":"2026-01-07T09:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:24 crc kubenswrapper[5131]: I0107 09:51:24.461709 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:24 crc kubenswrapper[5131]: I0107 09:51:24.461772 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:24 crc kubenswrapper[5131]: I0107 09:51:24.461789 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:24 crc kubenswrapper[5131]: I0107 09:51:24.461827 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:24 crc kubenswrapper[5131]: I0107 09:51:24.461867 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:24Z","lastTransitionTime":"2026-01-07T09:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:24 crc kubenswrapper[5131]: I0107 09:51:24.564081 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:24 crc kubenswrapper[5131]: I0107 09:51:24.564326 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:24 crc kubenswrapper[5131]: I0107 09:51:24.564402 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:24 crc kubenswrapper[5131]: I0107 09:51:24.564478 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:24 crc kubenswrapper[5131]: I0107 09:51:24.564551 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:24Z","lastTransitionTime":"2026-01-07T09:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:24 crc kubenswrapper[5131]: I0107 09:51:24.667030 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:24 crc kubenswrapper[5131]: I0107 09:51:24.667068 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:24 crc kubenswrapper[5131]: I0107 09:51:24.667083 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:24 crc kubenswrapper[5131]: I0107 09:51:24.667101 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:24 crc kubenswrapper[5131]: I0107 09:51:24.667117 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:24Z","lastTransitionTime":"2026-01-07T09:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 07 09:51:24 crc kubenswrapper[5131]: I0107 09:51:24.718174 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gbjvz" event={"ID":"5b188180-f777-4a12-845b-d19fd5853d85","Type":"ContainerStarted","Data":"e796926ccb5a1b5e5aa5ef04f0cf2905403e809023adc2ba740d7562cf1f2966"} Jan 07 09:51:24 crc kubenswrapper[5131]: I0107 09:51:24.726358 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" event={"ID":"592342ad-cf5f-4290-aa15-e99a6454cbf5","Type":"ContainerStarted","Data":"95fb0f05fbec814936f2b0eba3acdb40bb507d796edc8d78c69d4519b53a985f"} Jan 07 09:51:24 crc kubenswrapper[5131]: I0107 09:51:24.753722 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-gbjvz" podStartSLOduration=92.753693568 podStartE2EDuration="1m32.753693568s" podCreationTimestamp="2026-01-07 09:49:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:51:24.752668202 +0000 UTC m=+112.918969806" watchObservedRunningTime="2026-01-07 09:51:24.753693568 +0000 UTC m=+112.919995162" Jan 07 09:51:24 crc kubenswrapper[5131]: I0107 09:51:24.769629 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:24 crc kubenswrapper[5131]: I0107 09:51:24.769995 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:24 crc kubenswrapper[5131]: I0107 09:51:24.770228 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:24 crc kubenswrapper[5131]: I0107 09:51:24.770488 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:24 crc kubenswrapper[5131]: I0107 
09:51:24.770698 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:24Z","lastTransitionTime":"2026-01-07T09:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 09:51:24 crc kubenswrapper[5131]: I0107 09:51:24.826444 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 07 09:51:24 crc kubenswrapper[5131]: I0107 09:51:24.826949 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 07 09:51:24 crc kubenswrapper[5131]: I0107 09:51:24.827185 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 07 09:51:24 crc kubenswrapper[5131]: I0107 09:51:24.827439 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 07 09:51:24 crc kubenswrapper[5131]: I0107 09:51:24.827649 5131 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-07T09:51:24Z","lastTransitionTime":"2026-01-07T09:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 07 09:51:24 crc kubenswrapper[5131]: I0107 09:51:24.891309 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7c9b9cfd6-mdmpw"] Jan 07 09:51:24 crc kubenswrapper[5131]: I0107 09:51:24.894546 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-mdmpw" Jan 07 09:51:24 crc kubenswrapper[5131]: I0107 09:51:24.896499 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Jan 07 09:51:24 crc kubenswrapper[5131]: I0107 09:51:24.896694 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Jan 07 09:51:24 crc kubenswrapper[5131]: I0107 09:51:24.897411 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Jan 07 09:51:24 crc kubenswrapper[5131]: I0107 09:51:24.897682 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Jan 07 09:51:25 crc kubenswrapper[5131]: I0107 09:51:25.088783 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6e378140-5d82-442b-bace-f9455dba4854-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-mdmpw\" (UID: \"6e378140-5d82-442b-bace-f9455dba4854\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-mdmpw" Jan 07 09:51:25 crc kubenswrapper[5131]: I0107 09:51:25.088896 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6e378140-5d82-442b-bace-f9455dba4854-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-mdmpw\" (UID: \"6e378140-5d82-442b-bace-f9455dba4854\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-mdmpw" Jan 07 09:51:25 crc kubenswrapper[5131]: I0107 09:51:25.088999 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/6e378140-5d82-442b-bace-f9455dba4854-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-mdmpw\" (UID: \"6e378140-5d82-442b-bace-f9455dba4854\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-mdmpw" Jan 07 09:51:25 crc kubenswrapper[5131]: I0107 09:51:25.089045 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/6e378140-5d82-442b-bace-f9455dba4854-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-mdmpw\" (UID: \"6e378140-5d82-442b-bace-f9455dba4854\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-mdmpw" Jan 07 09:51:25 crc kubenswrapper[5131]: I0107 09:51:25.089135 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e378140-5d82-442b-bace-f9455dba4854-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-mdmpw\" (UID: \"6e378140-5d82-442b-bace-f9455dba4854\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-mdmpw" Jan 07 09:51:25 crc kubenswrapper[5131]: I0107 09:51:25.190334 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e378140-5d82-442b-bace-f9455dba4854-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-mdmpw\" (UID: \"6e378140-5d82-442b-bace-f9455dba4854\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-mdmpw" Jan 07 09:51:25 crc kubenswrapper[5131]: I0107 09:51:25.190541 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6e378140-5d82-442b-bace-f9455dba4854-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-mdmpw\" (UID: \"6e378140-5d82-442b-bace-f9455dba4854\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-mdmpw" 
Jan 07 09:51:25 crc kubenswrapper[5131]: I0107 09:51:25.190610 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6e378140-5d82-442b-bace-f9455dba4854-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-mdmpw\" (UID: \"6e378140-5d82-442b-bace-f9455dba4854\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-mdmpw" Jan 07 09:51:25 crc kubenswrapper[5131]: I0107 09:51:25.190698 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/6e378140-5d82-442b-bace-f9455dba4854-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-mdmpw\" (UID: \"6e378140-5d82-442b-bace-f9455dba4854\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-mdmpw" Jan 07 09:51:25 crc kubenswrapper[5131]: I0107 09:51:25.190745 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/6e378140-5d82-442b-bace-f9455dba4854-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-mdmpw\" (UID: \"6e378140-5d82-442b-bace-f9455dba4854\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-mdmpw" Jan 07 09:51:25 crc kubenswrapper[5131]: I0107 09:51:25.190892 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/6e378140-5d82-442b-bace-f9455dba4854-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-mdmpw\" (UID: \"6e378140-5d82-442b-bace-f9455dba4854\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-mdmpw" Jan 07 09:51:25 crc kubenswrapper[5131]: I0107 09:51:25.190984 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/6e378140-5d82-442b-bace-f9455dba4854-etc-ssl-certs\") pod 
\"cluster-version-operator-7c9b9cfd6-mdmpw\" (UID: \"6e378140-5d82-442b-bace-f9455dba4854\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-mdmpw" Jan 07 09:51:25 crc kubenswrapper[5131]: I0107 09:51:25.192224 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6e378140-5d82-442b-bace-f9455dba4854-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-mdmpw\" (UID: \"6e378140-5d82-442b-bace-f9455dba4854\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-mdmpw" Jan 07 09:51:25 crc kubenswrapper[5131]: I0107 09:51:25.200372 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e378140-5d82-442b-bace-f9455dba4854-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-mdmpw\" (UID: \"6e378140-5d82-442b-bace-f9455dba4854\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-mdmpw" Jan 07 09:51:25 crc kubenswrapper[5131]: I0107 09:51:25.222944 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6e378140-5d82-442b-bace-f9455dba4854-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-mdmpw\" (UID: \"6e378140-5d82-442b-bace-f9455dba4854\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-mdmpw" Jan 07 09:51:25 crc kubenswrapper[5131]: I0107 09:51:25.247730 5131 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Jan 07 09:51:25 crc kubenswrapper[5131]: I0107 09:51:25.257738 5131 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Jan 07 09:51:25 crc kubenswrapper[5131]: I0107 09:51:25.519257 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-mdmpw" Jan 07 09:51:25 crc kubenswrapper[5131]: W0107 09:51:25.539851 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e378140_5d82_442b_bace_f9455dba4854.slice/crio-a6281b770467a6187533d412f8b94d567325064ed442024ca5a05143cafc2813 WatchSource:0}: Error finding container a6281b770467a6187533d412f8b94d567325064ed442024ca5a05143cafc2813: Status 404 returned error can't find the container with id a6281b770467a6187533d412f8b94d567325064ed442024ca5a05143cafc2813 Jan 07 09:51:25 crc kubenswrapper[5131]: I0107 09:51:25.732744 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-mdmpw" event={"ID":"6e378140-5d82-442b-bace-f9455dba4854","Type":"ContainerStarted","Data":"ea2bdc60c50f712e495a1839b5d53f5ae6c892c3e1934f09b8282930c302e6f8"} Jan 07 09:51:25 crc kubenswrapper[5131]: I0107 09:51:25.733260 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-mdmpw" event={"ID":"6e378140-5d82-442b-bace-f9455dba4854","Type":"ContainerStarted","Data":"a6281b770467a6187533d412f8b94d567325064ed442024ca5a05143cafc2813"} Jan 07 09:51:25 crc kubenswrapper[5131]: I0107 09:51:25.753525 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-mdmpw" podStartSLOduration=93.753497021 podStartE2EDuration="1m33.753497021s" podCreationTimestamp="2026-01-07 09:49:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:51:25.752854835 +0000 UTC m=+113.919156439" watchObservedRunningTime="2026-01-07 09:51:25.753497021 +0000 UTC m=+113.919798665" Jan 07 09:51:26 crc kubenswrapper[5131]: I0107 09:51:26.188424 5131 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cj94" Jan 07 09:51:26 crc kubenswrapper[5131]: E0107 09:51:26.188713 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cj94" podUID="ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e" Jan 07 09:51:26 crc kubenswrapper[5131]: I0107 09:51:26.188936 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 07 09:51:26 crc kubenswrapper[5131]: E0107 09:51:26.189170 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 07 09:51:26 crc kubenswrapper[5131]: I0107 09:51:26.189324 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 07 09:51:26 crc kubenswrapper[5131]: E0107 09:51:26.189493 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 07 09:51:26 crc kubenswrapper[5131]: I0107 09:51:26.189622 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 07 09:51:26 crc kubenswrapper[5131]: E0107 09:51:26.189786 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 07 09:51:26 crc kubenswrapper[5131]: I0107 09:51:26.742097 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" event={"ID":"592342ad-cf5f-4290-aa15-e99a6454cbf5","Type":"ContainerStarted","Data":"126470e29248b08b35119158ec1d00986e765ade9dce116264bc2f31d71a8be5"} Jan 07 09:51:26 crc kubenswrapper[5131]: I0107 09:51:26.742627 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:51:26 crc kubenswrapper[5131]: I0107 09:51:26.742685 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 09:51:26 crc kubenswrapper[5131]: I0107 09:51:26.791030 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" podStartSLOduration=94.791006421 podStartE2EDuration="1m34.791006421s" podCreationTimestamp="2026-01-07 09:49:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:51:26.789321798 +0000 UTC m=+114.955623432" 
watchObservedRunningTime="2026-01-07 09:51:26.791006421 +0000 UTC m=+114.957308015"
Jan 07 09:51:26 crc kubenswrapper[5131]: I0107 09:51:26.824082 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m"
Jan 07 09:51:27 crc kubenswrapper[5131]: I0107 09:51:27.745976 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m"
Jan 07 09:51:27 crc kubenswrapper[5131]: I0107 09:51:27.785773 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m"
Jan 07 09:51:28 crc kubenswrapper[5131]: I0107 09:51:28.179624 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 07 09:51:28 crc kubenswrapper[5131]: I0107 09:51:28.179699 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 07 09:51:28 crc kubenswrapper[5131]: I0107 09:51:28.179647 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cj94"
Jan 07 09:51:28 crc kubenswrapper[5131]: E0107 09:51:28.179803 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 07 09:51:28 crc kubenswrapper[5131]: I0107 09:51:28.179688 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 07 09:51:28 crc kubenswrapper[5131]: E0107 09:51:28.179961 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 07 09:51:28 crc kubenswrapper[5131]: E0107 09:51:28.180169 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cj94" podUID="ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e"
Jan 07 09:51:28 crc kubenswrapper[5131]: E0107 09:51:28.180300 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 07 09:51:28 crc kubenswrapper[5131]: I0107 09:51:28.663414 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-5cj94"]
Jan 07 09:51:28 crc kubenswrapper[5131]: I0107 09:51:28.748113 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cj94"
Jan 07 09:51:28 crc kubenswrapper[5131]: E0107 09:51:28.748325 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cj94" podUID="ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e"
Jan 07 09:51:30 crc kubenswrapper[5131]: I0107 09:51:30.180444 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 07 09:51:30 crc kubenswrapper[5131]: I0107 09:51:30.180444 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cj94"
Jan 07 09:51:30 crc kubenswrapper[5131]: E0107 09:51:30.180668 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 07 09:51:30 crc kubenswrapper[5131]: I0107 09:51:30.180691 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 07 09:51:30 crc kubenswrapper[5131]: E0107 09:51:30.180956 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cj94" podUID="ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e"
Jan 07 09:51:30 crc kubenswrapper[5131]: E0107 09:51:30.181115 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 07 09:51:30 crc kubenswrapper[5131]: I0107 09:51:30.181148 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 07 09:51:30 crc kubenswrapper[5131]: E0107 09:51:30.181303 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 07 09:51:32 crc kubenswrapper[5131]: E0107 09:51:32.179229 5131 kubelet_node_status.go:509] "Node not becoming ready in time after startup"
Jan 07 09:51:32 crc kubenswrapper[5131]: I0107 09:51:32.184384 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 07 09:51:32 crc kubenswrapper[5131]: I0107 09:51:32.184550 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 07 09:51:32 crc kubenswrapper[5131]: I0107 09:51:32.184647 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cj94"
Jan 07 09:51:32 crc kubenswrapper[5131]: E0107 09:51:32.184642 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 07 09:51:32 crc kubenswrapper[5131]: E0107 09:51:32.184891 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 07 09:51:32 crc kubenswrapper[5131]: I0107 09:51:32.184965 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 07 09:51:32 crc kubenswrapper[5131]: E0107 09:51:32.185259 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cj94" podUID="ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e"
Jan 07 09:51:32 crc kubenswrapper[5131]: E0107 09:51:32.185437 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 07 09:51:32 crc kubenswrapper[5131]: E0107 09:51:32.287869 5131 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 07 09:51:34 crc kubenswrapper[5131]: I0107 09:51:34.180146 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 07 09:51:34 crc kubenswrapper[5131]: I0107 09:51:34.180343 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 07 09:51:34 crc kubenswrapper[5131]: E0107 09:51:34.180724 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 07 09:51:34 crc kubenswrapper[5131]: I0107 09:51:34.180415 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 07 09:51:34 crc kubenswrapper[5131]: I0107 09:51:34.180371 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cj94"
Jan 07 09:51:34 crc kubenswrapper[5131]: E0107 09:51:34.180975 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 07 09:51:34 crc kubenswrapper[5131]: E0107 09:51:34.181108 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cj94" podUID="ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e"
Jan 07 09:51:34 crc kubenswrapper[5131]: E0107 09:51:34.181220 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 07 09:51:36 crc kubenswrapper[5131]: I0107 09:51:36.179538 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 07 09:51:36 crc kubenswrapper[5131]: E0107 09:51:36.179752 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 07 09:51:36 crc kubenswrapper[5131]: I0107 09:51:36.179892 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 07 09:51:36 crc kubenswrapper[5131]: I0107 09:51:36.180000 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cj94"
Jan 07 09:51:36 crc kubenswrapper[5131]: E0107 09:51:36.180128 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 07 09:51:36 crc kubenswrapper[5131]: I0107 09:51:36.180231 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 07 09:51:36 crc kubenswrapper[5131]: E0107 09:51:36.180413 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 07 09:51:36 crc kubenswrapper[5131]: E0107 09:51:36.180504 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cj94" podUID="ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e"
Jan 07 09:51:38 crc kubenswrapper[5131]: I0107 09:51:38.179711 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 07 09:51:38 crc kubenswrapper[5131]: I0107 09:51:38.179771 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 07 09:51:38 crc kubenswrapper[5131]: I0107 09:51:38.180451 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 07 09:51:38 crc kubenswrapper[5131]: I0107 09:51:38.180771 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cj94"
Jan 07 09:51:38 crc kubenswrapper[5131]: I0107 09:51:38.183141 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\""
Jan 07 09:51:38 crc kubenswrapper[5131]: I0107 09:51:38.184528 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\""
Jan 07 09:51:38 crc kubenswrapper[5131]: I0107 09:51:38.185177 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\""
Jan 07 09:51:38 crc kubenswrapper[5131]: I0107 09:51:38.186167 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\""
Jan 07 09:51:38 crc kubenswrapper[5131]: I0107 09:51:38.186360 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\""
Jan 07 09:51:38 crc kubenswrapper[5131]: I0107 09:51:38.186529 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\""
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.205701 5131 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeReady"
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.259338 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-k5x25"]
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.569278 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-cw4c4"]
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.569477 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-k5x25"
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.574592 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-pssml"]
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.574762 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-cw4c4"
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.576866 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\""
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.577807 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\""
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.578141 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\""
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.578169 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\""
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.578907 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\""
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.579151 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\""
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.579337 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\""
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.579393 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\""
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.579426 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\""
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.579965 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\""
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.580161 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\""
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.580231 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\""
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.580360 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\""
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.580452 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\""
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.580539 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\""
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.580705 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\""
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.588211 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-g5kcd"]
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.588375 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-pssml"
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.592258 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-nhgrp"]
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.592423 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-g5kcd"
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.596584 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-vm2vl"]
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.597119 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-nhgrp"
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.602889 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\""
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.602985 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\""
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.603340 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-64d44f6ddf-xvbzj"]
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.603559 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-vm2vl"
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.607551 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-67c89758df-flx6z"]
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.608047 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-xvbzj"
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.617133 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\""
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.617559 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-54c688565-vjhjw"]
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.618399 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\""
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.619152 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\""
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.620866 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\""
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.621554 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\""
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.622103 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-bc9f4"]
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.623440 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-flx6z"
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.623901 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\""
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.624951 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-vjhjw"
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.630016 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\""
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.630181 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\""
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.630238 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\""
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.631698 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\""
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.631908 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\""
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.633144 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\""
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.633730 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\""
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.634853 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\""
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.635140 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\""
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.635331 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\""
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.636013 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\""
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.636040 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-4ktsk"]
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.636147 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\""
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.637423 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-bc9f4"
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.637534 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\""
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.638297 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\""
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.638197 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\""
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.641460 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\""
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.642208 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\""
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.642434 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\""
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.642589 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\""
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.642863 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\""
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.643045 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\""
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.643236 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\""
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.643272 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\""
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.643396 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\""
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.643473 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\""
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.644485 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\""
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.645106 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\""
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.647825 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\""
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.647921 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\""
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.648877 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\""
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.649077 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\""
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.649249 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\""
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.649369 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\""
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.649538 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\""
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.649868 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\""
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.649916 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1d8f71c1-e1fc-4770-ad03-7a1d4d244ce0-console-config\") pod \"console-64d44f6ddf-xvbzj\" (UID: \"1d8f71c1-e1fc-4770-ad03-7a1d4d244ce0\") " pod="openshift-console/console-64d44f6ddf-xvbzj"
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.649928 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\""
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.649978 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4f9f2345-5823-4288-ad4b-e49b1088cba4-etcd-client\") pod \"apiserver-9ddfb9f55-k5x25\" (UID: \"4f9f2345-5823-4288-ad4b-e49b1088cba4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-k5x25"
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.649933 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\""
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.650034 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4f9f2345-5823-4288-ad4b-e49b1088cba4-encryption-config\") pod \"apiserver-9ddfb9f55-k5x25\" (UID: \"4f9f2345-5823-4288-ad4b-e49b1088cba4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-k5x25"
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.650115 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6497dc94-29dd-4d24-8a87-6721b752e8d3-serving-cert\") pod \"route-controller-manager-776cdc94d6-nhgrp\" (UID: \"6497dc94-29dd-4d24-8a87-6721b752e8d3\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-nhgrp"
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.650153 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fptn\" (UniqueName: \"kubernetes.io/projected/3c82fced-e466-4e52-8d61-b62e172d3ea9-kube-api-access-5fptn\") pod \"machine-api-operator-755bb95488-cw4c4\" (UID: \"3c82fced-e466-4e52-8d61-b62e172d3ea9\") " pod="openshift-machine-api/machine-api-operator-755bb95488-cw4c4"
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.650186 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3c82fced-e466-4e52-8d61-b62e172d3ea9-images\") pod \"machine-api-operator-755bb95488-cw4c4\" (UID: \"3c82fced-e466-4e52-8d61-b62e172d3ea9\") " pod="openshift-machine-api/machine-api-operator-755bb95488-cw4c4"
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.650242 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6497dc94-29dd-4d24-8a87-6721b752e8d3-client-ca\") pod \"route-controller-manager-776cdc94d6-nhgrp\" (UID: \"6497dc94-29dd-4d24-8a87-6721b752e8d3\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-nhgrp"
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.650279 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4f9f2345-5823-4288-ad4b-e49b1088cba4-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-k5x25\" (UID: \"4f9f2345-5823-4288-ad4b-e49b1088cba4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-k5x25"
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.650422 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c82fced-e466-4e52-8d61-b62e172d3ea9-config\") pod \"machine-api-operator-755bb95488-cw4c4\" (UID: \"3c82fced-e466-4e52-8d61-b62e172d3ea9\") " pod="openshift-machine-api/machine-api-operator-755bb95488-cw4c4"
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.650513 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f9f2345-5823-4288-ad4b-e49b1088cba4-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-k5x25\" (UID: \"4f9f2345-5823-4288-ad4b-e49b1088cba4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-k5x25"
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.650586 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1d8f71c1-e1fc-4770-ad03-7a1d4d244ce0-console-oauth-config\") pod \"console-64d44f6ddf-xvbzj\" (UID: \"1d8f71c1-e1fc-4770-ad03-7a1d4d244ce0\") " pod="openshift-console/console-64d44f6ddf-xvbzj"
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.650664 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1d8f71c1-e1fc-4770-ad03-7a1d4d244ce0-oauth-serving-cert\") pod \"console-64d44f6ddf-xvbzj\" (UID: \"1d8f71c1-e1fc-4770-ad03-7a1d4d244ce0\") " pod="openshift-console/console-64d44f6ddf-xvbzj"
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.650692 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ef3c64b-1b50-43d7-888b-fa1d6dcf0282-config\") pod \"authentication-operator-7f5c659b84-g5kcd\" (UID: \"0ef3c64b-1b50-43d7-888b-fa1d6dcf0282\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-g5kcd"
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.650712 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-898c6\" (UniqueName: \"kubernetes.io/projected/b56ad19c-62aa-42e5-bdce-bd890317e4da-kube-api-access-898c6\") pod \"console-operator-67c89758df-flx6z\" (UID: \"b56ad19c-62aa-42e5-bdce-bd890317e4da\") " pod="openshift-console-operator/console-operator-67c89758df-flx6z"
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.650750 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f9f2345-5823-4288-ad4b-e49b1088cba4-config\") pod \"apiserver-9ddfb9f55-k5x25\" (UID: \"4f9f2345-5823-4288-ad4b-e49b1088cba4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-k5x25"
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.650768 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5116ec44-994d-4f27-872b-09ada0a94b73-config\") pod \"openshift-apiserver-operator-846cbfc458-vm2vl\" (UID: \"5116ec44-994d-4f27-872b-09ada0a94b73\") "
pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-vm2vl" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.650805 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f9f2345-5823-4288-ad4b-e49b1088cba4-serving-cert\") pod \"apiserver-9ddfb9f55-k5x25\" (UID: \"4f9f2345-5823-4288-ad4b-e49b1088cba4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-k5x25" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.650908 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ef3c64b-1b50-43d7-888b-fa1d6dcf0282-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-g5kcd\" (UID: \"0ef3c64b-1b50-43d7-888b-fa1d6dcf0282\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-g5kcd" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.650971 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqzzf\" (UniqueName: \"kubernetes.io/projected/0ef3c64b-1b50-43d7-888b-fa1d6dcf0282-kube-api-access-nqzzf\") pod \"authentication-operator-7f5c659b84-g5kcd\" (UID: \"0ef3c64b-1b50-43d7-888b-fa1d6dcf0282\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-g5kcd" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.651019 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30454542-4fc6-4b3b-8917-13b0898bdc75-config\") pod \"machine-approver-54c688565-vjhjw\" (UID: \"30454542-4fc6-4b3b-8917-13b0898bdc75\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-vjhjw" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.651073 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/30454542-4fc6-4b3b-8917-13b0898bdc75-auth-proxy-config\") pod \"machine-approver-54c688565-vjhjw\" (UID: \"30454542-4fc6-4b3b-8917-13b0898bdc75\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-vjhjw" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.651115 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/30454542-4fc6-4b3b-8917-13b0898bdc75-machine-approver-tls\") pod \"machine-approver-54c688565-vjhjw\" (UID: \"30454542-4fc6-4b3b-8917-13b0898bdc75\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-vjhjw" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.651205 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d8f71c1-e1fc-4770-ad03-7a1d4d244ce0-trusted-ca-bundle\") pod \"console-64d44f6ddf-xvbzj\" (UID: \"1d8f71c1-e1fc-4770-ad03-7a1d4d244ce0\") " pod="openshift-console/console-64d44f6ddf-xvbzj" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.651257 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4f9f2345-5823-4288-ad4b-e49b1088cba4-node-pullsecrets\") pod \"apiserver-9ddfb9f55-k5x25\" (UID: \"4f9f2345-5823-4288-ad4b-e49b1088cba4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-k5x25" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.651380 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/3c82fced-e466-4e52-8d61-b62e172d3ea9-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-cw4c4\" (UID: \"3c82fced-e466-4e52-8d61-b62e172d3ea9\") " 
pod="openshift-machine-api/machine-api-operator-755bb95488-cw4c4" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.651436 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/4f9f2345-5823-4288-ad4b-e49b1088cba4-audit\") pod \"apiserver-9ddfb9f55-k5x25\" (UID: \"4f9f2345-5823-4288-ad4b-e49b1088cba4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-k5x25" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.651476 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1d8f71c1-e1fc-4770-ad03-7a1d4d244ce0-console-serving-cert\") pod \"console-64d44f6ddf-xvbzj\" (UID: \"1d8f71c1-e1fc-4770-ad03-7a1d4d244ce0\") " pod="openshift-console/console-64d44f6ddf-xvbzj" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.651508 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvsh5\" (UniqueName: \"kubernetes.io/projected/5116ec44-994d-4f27-872b-09ada0a94b73-kube-api-access-zvsh5\") pod \"openshift-apiserver-operator-846cbfc458-vm2vl\" (UID: \"5116ec44-994d-4f27-872b-09ada0a94b73\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-vm2vl" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.651536 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.651563 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6497dc94-29dd-4d24-8a87-6721b752e8d3-config\") pod \"route-controller-manager-776cdc94d6-nhgrp\" (UID: \"6497dc94-29dd-4d24-8a87-6721b752e8d3\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-nhgrp" Jan 
07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.651696 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b56ad19c-62aa-42e5-bdce-bd890317e4da-config\") pod \"console-operator-67c89758df-flx6z\" (UID: \"b56ad19c-62aa-42e5-bdce-bd890317e4da\") " pod="openshift-console-operator/console-operator-67c89758df-flx6z" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.651764 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gj97\" (UniqueName: \"kubernetes.io/projected/6497dc94-29dd-4d24-8a87-6721b752e8d3-kube-api-access-5gj97\") pod \"route-controller-manager-776cdc94d6-nhgrp\" (UID: \"6497dc94-29dd-4d24-8a87-6721b752e8d3\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-nhgrp" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.651828 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b56ad19c-62aa-42e5-bdce-bd890317e4da-serving-cert\") pod \"console-operator-67c89758df-flx6z\" (UID: \"b56ad19c-62aa-42e5-bdce-bd890317e4da\") " pod="openshift-console-operator/console-operator-67c89758df-flx6z" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.651901 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfb67\" (UniqueName: \"kubernetes.io/projected/4f9f2345-5823-4288-ad4b-e49b1088cba4-kube-api-access-qfb67\") pod \"apiserver-9ddfb9f55-k5x25\" (UID: \"4f9f2345-5823-4288-ad4b-e49b1088cba4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-k5x25" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.652211 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df9wz\" (UniqueName: 
\"kubernetes.io/projected/1d8f71c1-e1fc-4770-ad03-7a1d4d244ce0-kube-api-access-df9wz\") pod \"console-64d44f6ddf-xvbzj\" (UID: \"1d8f71c1-e1fc-4770-ad03-7a1d4d244ce0\") " pod="openshift-console/console-64d44f6ddf-xvbzj" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.652564 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ef3c64b-1b50-43d7-888b-fa1d6dcf0282-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-g5kcd\" (UID: \"0ef3c64b-1b50-43d7-888b-fa1d6dcf0282\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-g5kcd" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.653263 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6497dc94-29dd-4d24-8a87-6721b752e8d3-tmp\") pod \"route-controller-manager-776cdc94d6-nhgrp\" (UID: \"6497dc94-29dd-4d24-8a87-6721b752e8d3\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-nhgrp" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.653720 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5116ec44-994d-4f27-872b-09ada0a94b73-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-vm2vl\" (UID: \"5116ec44-994d-4f27-872b-09ada0a94b73\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-vm2vl" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.653787 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1d8f71c1-e1fc-4770-ad03-7a1d4d244ce0-service-ca\") pod \"console-64d44f6ddf-xvbzj\" (UID: \"1d8f71c1-e1fc-4770-ad03-7a1d4d244ce0\") " pod="openshift-console/console-64d44f6ddf-xvbzj" Jan 07 09:51:45 crc 
kubenswrapper[5131]: I0107 09:51:45.653860 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ef3c64b-1b50-43d7-888b-fa1d6dcf0282-serving-cert\") pod \"authentication-operator-7f5c659b84-g5kcd\" (UID: \"0ef3c64b-1b50-43d7-888b-fa1d6dcf0282\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-g5kcd" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.653906 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b56ad19c-62aa-42e5-bdce-bd890317e4da-trusted-ca\") pod \"console-operator-67c89758df-flx6z\" (UID: \"b56ad19c-62aa-42e5-bdce-bd890317e4da\") " pod="openshift-console-operator/console-operator-67c89758df-flx6z" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.654040 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5px8d\" (UniqueName: \"kubernetes.io/projected/30454542-4fc6-4b3b-8917-13b0898bdc75-kube-api-access-5px8d\") pod \"machine-approver-54c688565-vjhjw\" (UID: \"30454542-4fc6-4b3b-8917-13b0898bdc75\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-vjhjw" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.654096 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4f9f2345-5823-4288-ad4b-e49b1088cba4-audit-dir\") pod \"apiserver-9ddfb9f55-k5x25\" (UID: \"4f9f2345-5823-4288-ad4b-e49b1088cba4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-k5x25" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.654131 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/4f9f2345-5823-4288-ad4b-e49b1088cba4-image-import-ca\") pod 
\"apiserver-9ddfb9f55-k5x25\" (UID: \"4f9f2345-5823-4288-ad4b-e49b1088cba4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-k5x25" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.655948 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-jz8vb"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.661993 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-4ktsk" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.662644 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.663991 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.664342 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.665768 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.666099 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.666208 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.666639 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Jan 07 09:51:45 crc 
kubenswrapper[5131]: I0107 09:51:45.667916 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.668347 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.669964 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-mh9sm"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.670130 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-jz8vb" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.672788 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-cnl99"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.673063 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-mh9sm" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.675414 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-747b44746d-vdsqq"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.675549 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-cnl99" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.680225 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.680493 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.680502 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.680623 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.680726 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.681425 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-4rj8b"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.681570 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.681898 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.681994 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-747b44746d-vdsqq" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.685284 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.686310 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-lbptl"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.687498 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-4rj8b" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.687542 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.687661 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.687765 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.687658 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.687938 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.689634 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-hxnn7"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.690127 5131 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-lbptl" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.694651 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.694959 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.695122 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.695223 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.695281 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.695449 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.695470 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.696147 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.696619 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-pxdmv"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.697130 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-hxnn7" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.697542 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.697630 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.699092 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.700355 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.700811 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-hhbw5"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.700799 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.701059 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.701734 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-pxdmv" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.708341 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-rwmlc"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.708518 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-hhbw5" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.711875 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-l2qqh"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.712206 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-rwmlc" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.712253 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.714888 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-gqz76"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.715033 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-l2qqh" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.717370 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-jk2mp"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.717498 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-gqz76" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.721612 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-7dn5b"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.721761 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-jk2mp" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.724200 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-2zfmr"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.724476 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-7dn5b" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.726419 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29462985-nd9tw"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.726532 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-2zfmr" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.729710 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-q9xx7"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.729788 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29462985-nd9tw" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.731785 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.736140 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-vb59d"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.736247 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-q9xx7" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.739520 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-mrfk7"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.739619 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-vb59d" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.743875 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-psjk8"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.744007 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-mrfk7" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.747258 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-wm88r"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.747442 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-psjk8" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.752536 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-74545575db-fgq7r"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.753473 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-wm88r" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.754952 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3c82fced-e466-4e52-8d61-b62e172d3ea9-images\") pod \"machine-api-operator-755bb95488-cw4c4\" (UID: \"3c82fced-e466-4e52-8d61-b62e172d3ea9\") " pod="openshift-machine-api/machine-api-operator-755bb95488-cw4c4" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.754996 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6497dc94-29dd-4d24-8a87-6721b752e8d3-client-ca\") pod \"route-controller-manager-776cdc94d6-nhgrp\" (UID: \"6497dc94-29dd-4d24-8a87-6721b752e8d3\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-nhgrp" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.755027 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e92757e-cc25-48a6-a774-5c2a8a281576-registry-tls\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.755540 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4f9f2345-5823-4288-ad4b-e49b1088cba4-etcd-serving-ca\") 
pod \"apiserver-9ddfb9f55-k5x25\" (UID: \"4f9f2345-5823-4288-ad4b-e49b1088cba4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-k5x25" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.755783 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c82fced-e466-4e52-8d61-b62e172d3ea9-config\") pod \"machine-api-operator-755bb95488-cw4c4\" (UID: \"3c82fced-e466-4e52-8d61-b62e172d3ea9\") " pod="openshift-machine-api/machine-api-operator-755bb95488-cw4c4" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.755852 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f9f2345-5823-4288-ad4b-e49b1088cba4-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-k5x25\" (UID: \"4f9f2345-5823-4288-ad4b-e49b1088cba4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-k5x25" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.755902 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1d8f71c1-e1fc-4770-ad03-7a1d4d244ce0-console-oauth-config\") pod \"console-64d44f6ddf-xvbzj\" (UID: \"1d8f71c1-e1fc-4770-ad03-7a1d4d244ce0\") " pod="openshift-console/console-64d44f6ddf-xvbzj" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.755937 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e92757e-cc25-48a6-a774-5c2a8a281576-bound-sa-token\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.756196 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/6497dc94-29dd-4d24-8a87-6721b752e8d3-client-ca\") pod \"route-controller-manager-776cdc94d6-nhgrp\" (UID: \"6497dc94-29dd-4d24-8a87-6721b752e8d3\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-nhgrp" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.756503 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3c82fced-e466-4e52-8d61-b62e172d3ea9-images\") pod \"machine-api-operator-755bb95488-cw4c4\" (UID: \"3c82fced-e466-4e52-8d61-b62e172d3ea9\") " pod="openshift-machine-api/machine-api-operator-755bb95488-cw4c4" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.756790 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1d8f71c1-e1fc-4770-ad03-7a1d4d244ce0-oauth-serving-cert\") pod \"console-64d44f6ddf-xvbzj\" (UID: \"1d8f71c1-e1fc-4770-ad03-7a1d4d244ce0\") " pod="openshift-console/console-64d44f6ddf-xvbzj" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.756960 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ef3c64b-1b50-43d7-888b-fa1d6dcf0282-config\") pod \"authentication-operator-7f5c659b84-g5kcd\" (UID: \"0ef3c64b-1b50-43d7-888b-fa1d6dcf0282\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-g5kcd" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.757168 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-898c6\" (UniqueName: \"kubernetes.io/projected/b56ad19c-62aa-42e5-bdce-bd890317e4da-kube-api-access-898c6\") pod \"console-operator-67c89758df-flx6z\" (UID: \"b56ad19c-62aa-42e5-bdce-bd890317e4da\") " pod="openshift-console-operator/console-operator-67c89758df-flx6z" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.756983 5131 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4f9f2345-5823-4288-ad4b-e49b1088cba4-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-k5x25\" (UID: \"4f9f2345-5823-4288-ad4b-e49b1088cba4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-k5x25" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.760221 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f9f2345-5823-4288-ad4b-e49b1088cba4-config\") pod \"apiserver-9ddfb9f55-k5x25\" (UID: \"4f9f2345-5823-4288-ad4b-e49b1088cba4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-k5x25" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.760280 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5116ec44-994d-4f27-872b-09ada0a94b73-config\") pod \"openshift-apiserver-operator-846cbfc458-vm2vl\" (UID: \"5116ec44-994d-4f27-872b-09ada0a94b73\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-vm2vl" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.760341 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f9f2345-5823-4288-ad4b-e49b1088cba4-serving-cert\") pod \"apiserver-9ddfb9f55-k5x25\" (UID: \"4f9f2345-5823-4288-ad4b-e49b1088cba4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-k5x25" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.760391 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ef3c64b-1b50-43d7-888b-fa1d6dcf0282-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-g5kcd\" (UID: \"0ef3c64b-1b50-43d7-888b-fa1d6dcf0282\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-g5kcd" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.760397 5131 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c82fced-e466-4e52-8d61-b62e172d3ea9-config\") pod \"machine-api-operator-755bb95488-cw4c4\" (UID: \"3c82fced-e466-4e52-8d61-b62e172d3ea9\") " pod="openshift-machine-api/machine-api-operator-755bb95488-cw4c4" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.760422 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nqzzf\" (UniqueName: \"kubernetes.io/projected/0ef3c64b-1b50-43d7-888b-fa1d6dcf0282-kube-api-access-nqzzf\") pod \"authentication-operator-7f5c659b84-g5kcd\" (UID: \"0ef3c64b-1b50-43d7-888b-fa1d6dcf0282\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-g5kcd" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.760580 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30454542-4fc6-4b3b-8917-13b0898bdc75-config\") pod \"machine-approver-54c688565-vjhjw\" (UID: \"30454542-4fc6-4b3b-8917-13b0898bdc75\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-vjhjw" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.760792 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f9f2345-5823-4288-ad4b-e49b1088cba4-config\") pod \"apiserver-9ddfb9f55-k5x25\" (UID: \"4f9f2345-5823-4288-ad4b-e49b1088cba4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-k5x25" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.760929 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1d8f71c1-e1fc-4770-ad03-7a1d4d244ce0-oauth-serving-cert\") pod \"console-64d44f6ddf-xvbzj\" (UID: \"1d8f71c1-e1fc-4770-ad03-7a1d4d244ce0\") " pod="openshift-console/console-64d44f6ddf-xvbzj" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 
09:51:45.761324 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ef3c64b-1b50-43d7-888b-fa1d6dcf0282-config\") pod \"authentication-operator-7f5c659b84-g5kcd\" (UID: \"0ef3c64b-1b50-43d7-888b-fa1d6dcf0282\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-g5kcd" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.760792 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e92757e-cc25-48a6-a774-5c2a8a281576-registry-certificates\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.764347 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e92757e-cc25-48a6-a774-5c2a8a281576-ca-trust-extracted\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.761940 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f9f2345-5823-4288-ad4b-e49b1088cba4-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-k5x25\" (UID: \"4f9f2345-5823-4288-ad4b-e49b1088cba4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-k5x25" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.762643 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ef3c64b-1b50-43d7-888b-fa1d6dcf0282-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-g5kcd\" (UID: \"0ef3c64b-1b50-43d7-888b-fa1d6dcf0282\") " 
pod="openshift-authentication-operator/authentication-operator-7f5c659b84-g5kcd" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.763081 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30454542-4fc6-4b3b-8917-13b0898bdc75-config\") pod \"machine-approver-54c688565-vjhjw\" (UID: \"30454542-4fc6-4b3b-8917-13b0898bdc75\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-vjhjw" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.763186 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.764731 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4-client-ca\") pod \"controller-manager-65b6cccf98-pssml\" (UID: \"4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-pssml" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.765116 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/30454542-4fc6-4b3b-8917-13b0898bdc75-auth-proxy-config\") pod \"machine-approver-54c688565-vjhjw\" (UID: \"30454542-4fc6-4b3b-8917-13b0898bdc75\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-vjhjw" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.765143 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/30454542-4fc6-4b3b-8917-13b0898bdc75-machine-approver-tls\") pod \"machine-approver-54c688565-vjhjw\" (UID: \"30454542-4fc6-4b3b-8917-13b0898bdc75\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-vjhjw" Jan 07 09:51:45 crc 
kubenswrapper[5131]: I0107 09:51:45.765172 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d8f71c1-e1fc-4770-ad03-7a1d4d244ce0-trusted-ca-bundle\") pod \"console-64d44f6ddf-xvbzj\" (UID: \"1d8f71c1-e1fc-4770-ad03-7a1d4d244ce0\") " pod="openshift-console/console-64d44f6ddf-xvbzj" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.765194 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4f9f2345-5823-4288-ad4b-e49b1088cba4-node-pullsecrets\") pod \"apiserver-9ddfb9f55-k5x25\" (UID: \"4f9f2345-5823-4288-ad4b-e49b1088cba4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-k5x25" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.762308 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5116ec44-994d-4f27-872b-09ada0a94b73-config\") pod \"openshift-apiserver-operator-846cbfc458-vm2vl\" (UID: \"5116ec44-994d-4f27-872b-09ada0a94b73\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-vm2vl" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.765417 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.765617 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/3c82fced-e466-4e52-8d61-b62e172d3ea9-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-cw4c4\" (UID: 
\"3c82fced-e466-4e52-8d61-b62e172d3ea9\") " pod="openshift-machine-api/machine-api-operator-755bb95488-cw4c4" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.765861 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/4f9f2345-5823-4288-ad4b-e49b1088cba4-audit\") pod \"apiserver-9ddfb9f55-k5x25\" (UID: \"4f9f2345-5823-4288-ad4b-e49b1088cba4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-k5x25" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.766044 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/30454542-4fc6-4b3b-8917-13b0898bdc75-auth-proxy-config\") pod \"machine-approver-54c688565-vjhjw\" (UID: \"30454542-4fc6-4b3b-8917-13b0898bdc75\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-vjhjw" Jan 07 09:51:45 crc kubenswrapper[5131]: E0107 09:51:45.766571 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:46.266557189 +0000 UTC m=+134.432858753 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.766724 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-sftp2"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.766809 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/4f9f2345-5823-4288-ad4b-e49b1088cba4-audit\") pod \"apiserver-9ddfb9f55-k5x25\" (UID: \"4f9f2345-5823-4288-ad4b-e49b1088cba4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-k5x25" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.767169 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-fgq7r" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.767595 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f9f2345-5823-4288-ad4b-e49b1088cba4-serving-cert\") pod \"apiserver-9ddfb9f55-k5x25\" (UID: \"4f9f2345-5823-4288-ad4b-e49b1088cba4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-k5x25" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.767974 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1d8f71c1-e1fc-4770-ad03-7a1d4d244ce0-console-oauth-config\") pod \"console-64d44f6ddf-xvbzj\" (UID: \"1d8f71c1-e1fc-4770-ad03-7a1d4d244ce0\") " pod="openshift-console/console-64d44f6ddf-xvbzj" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.768045 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1d8f71c1-e1fc-4770-ad03-7a1d4d244ce0-console-serving-cert\") pod \"console-64d44f6ddf-xvbzj\" (UID: \"1d8f71c1-e1fc-4770-ad03-7a1d4d244ce0\") " pod="openshift-console/console-64d44f6ddf-xvbzj" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.768083 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zvsh5\" (UniqueName: \"kubernetes.io/projected/5116ec44-994d-4f27-872b-09ada0a94b73-kube-api-access-zvsh5\") pod \"openshift-apiserver-operator-846cbfc458-vm2vl\" (UID: \"5116ec44-994d-4f27-872b-09ada0a94b73\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-vm2vl" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.768155 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e92757e-cc25-48a6-a774-5c2a8a281576-installation-pull-secrets\") 
pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.768206 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv8dz\" (UniqueName: \"kubernetes.io/projected/4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4-kube-api-access-sv8dz\") pod \"controller-manager-65b6cccf98-pssml\" (UID: \"4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-pssml" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.768356 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4f9f2345-5823-4288-ad4b-e49b1088cba4-node-pullsecrets\") pod \"apiserver-9ddfb9f55-k5x25\" (UID: \"4f9f2345-5823-4288-ad4b-e49b1088cba4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-k5x25" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.768445 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6497dc94-29dd-4d24-8a87-6721b752e8d3-config\") pod \"route-controller-manager-776cdc94d6-nhgrp\" (UID: \"6497dc94-29dd-4d24-8a87-6721b752e8d3\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-nhgrp" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.768525 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhb7r\" (UniqueName: \"kubernetes.io/projected/0fe1be72-61f5-4433-908d-225206c4c7a1-kube-api-access-bhb7r\") pod \"cluster-samples-operator-6b564684c8-4ktsk\" (UID: \"0fe1be72-61f5-4433-908d-225206c4c7a1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-4ktsk" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.769925 5131 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6497dc94-29dd-4d24-8a87-6721b752e8d3-config\") pod \"route-controller-manager-776cdc94d6-nhgrp\" (UID: \"6497dc94-29dd-4d24-8a87-6721b752e8d3\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-nhgrp" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.770035 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b56ad19c-62aa-42e5-bdce-bd890317e4da-config\") pod \"console-operator-67c89758df-flx6z\" (UID: \"b56ad19c-62aa-42e5-bdce-bd890317e4da\") " pod="openshift-console-operator/console-operator-67c89758df-flx6z" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.770084 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5gj97\" (UniqueName: \"kubernetes.io/projected/6497dc94-29dd-4d24-8a87-6721b752e8d3-kube-api-access-5gj97\") pod \"route-controller-manager-776cdc94d6-nhgrp\" (UID: \"6497dc94-29dd-4d24-8a87-6721b752e8d3\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-nhgrp" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.770049 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d8f71c1-e1fc-4770-ad03-7a1d4d244ce0-trusted-ca-bundle\") pod \"console-64d44f6ddf-xvbzj\" (UID: \"1d8f71c1-e1fc-4770-ad03-7a1d4d244ce0\") " pod="openshift-console/console-64d44f6ddf-xvbzj" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.770354 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b56ad19c-62aa-42e5-bdce-bd890317e4da-serving-cert\") pod \"console-operator-67c89758df-flx6z\" (UID: \"b56ad19c-62aa-42e5-bdce-bd890317e4da\") " pod="openshift-console-operator/console-operator-67c89758df-flx6z" Jan 07 09:51:45 crc 
kubenswrapper[5131]: I0107 09:51:45.770535 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qfb67\" (UniqueName: \"kubernetes.io/projected/4f9f2345-5823-4288-ad4b-e49b1088cba4-kube-api-access-qfb67\") pod \"apiserver-9ddfb9f55-k5x25\" (UID: \"4f9f2345-5823-4288-ad4b-e49b1088cba4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-k5x25" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.771002 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-df9wz\" (UniqueName: \"kubernetes.io/projected/1d8f71c1-e1fc-4770-ad03-7a1d4d244ce0-kube-api-access-df9wz\") pod \"console-64d44f6ddf-xvbzj\" (UID: \"1d8f71c1-e1fc-4770-ad03-7a1d4d244ce0\") " pod="openshift-console/console-64d44f6ddf-xvbzj" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.771117 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ef3c64b-1b50-43d7-888b-fa1d6dcf0282-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-g5kcd\" (UID: \"0ef3c64b-1b50-43d7-888b-fa1d6dcf0282\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-g5kcd" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.771202 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6497dc94-29dd-4d24-8a87-6721b752e8d3-tmp\") pod \"route-controller-manager-776cdc94d6-nhgrp\" (UID: \"6497dc94-29dd-4d24-8a87-6721b752e8d3\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-nhgrp" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.771294 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5116ec44-994d-4f27-872b-09ada0a94b73-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-vm2vl\" (UID: 
\"5116ec44-994d-4f27-872b-09ada0a94b73\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-vm2vl" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.771384 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0fe1be72-61f5-4433-908d-225206c4c7a1-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-4ktsk\" (UID: \"0fe1be72-61f5-4433-908d-225206c4c7a1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-4ktsk" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.771864 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1d8f71c1-e1fc-4770-ad03-7a1d4d244ce0-service-ca\") pod \"console-64d44f6ddf-xvbzj\" (UID: \"1d8f71c1-e1fc-4770-ad03-7a1d4d244ce0\") " pod="openshift-console/console-64d44f6ddf-xvbzj" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.771956 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4-serving-cert\") pod \"controller-manager-65b6cccf98-pssml\" (UID: \"4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-pssml" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.772358 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ef3c64b-1b50-43d7-888b-fa1d6dcf0282-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-g5kcd\" (UID: \"0ef3c64b-1b50-43d7-888b-fa1d6dcf0282\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-g5kcd" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.771082 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/b56ad19c-62aa-42e5-bdce-bd890317e4da-config\") pod \"console-operator-67c89758df-flx6z\" (UID: \"b56ad19c-62aa-42e5-bdce-bd890317e4da\") " pod="openshift-console-operator/console-operator-67c89758df-flx6z" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.773053 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/3c82fced-e466-4e52-8d61-b62e172d3ea9-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-cw4c4\" (UID: \"3c82fced-e466-4e52-8d61-b62e172d3ea9\") " pod="openshift-machine-api/machine-api-operator-755bb95488-cw4c4" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.773175 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ef3c64b-1b50-43d7-888b-fa1d6dcf0282-serving-cert\") pod \"authentication-operator-7f5c659b84-g5kcd\" (UID: \"0ef3c64b-1b50-43d7-888b-fa1d6dcf0282\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-g5kcd" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.773230 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1d8f71c1-e1fc-4770-ad03-7a1d4d244ce0-console-serving-cert\") pod \"console-64d44f6ddf-xvbzj\" (UID: \"1d8f71c1-e1fc-4770-ad03-7a1d4d244ce0\") " pod="openshift-console/console-64d44f6ddf-xvbzj" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.773823 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4-config\") pod \"controller-manager-65b6cccf98-pssml\" (UID: \"4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-pssml" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.773978 5131 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b56ad19c-62aa-42e5-bdce-bd890317e4da-trusted-ca\") pod \"console-operator-67c89758df-flx6z\" (UID: \"b56ad19c-62aa-42e5-bdce-bd890317e4da\") " pod="openshift-console-operator/console-operator-67c89758df-flx6z" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.774058 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6497dc94-29dd-4d24-8a87-6721b752e8d3-tmp\") pod \"route-controller-manager-776cdc94d6-nhgrp\" (UID: \"6497dc94-29dd-4d24-8a87-6721b752e8d3\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-nhgrp" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.774180 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5px8d\" (UniqueName: \"kubernetes.io/projected/30454542-4fc6-4b3b-8917-13b0898bdc75-kube-api-access-5px8d\") pod \"machine-approver-54c688565-vjhjw\" (UID: \"30454542-4fc6-4b3b-8917-13b0898bdc75\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-vjhjw" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.774313 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4f9f2345-5823-4288-ad4b-e49b1088cba4-audit-dir\") pod \"apiserver-9ddfb9f55-k5x25\" (UID: \"4f9f2345-5823-4288-ad4b-e49b1088cba4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-k5x25" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.774379 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4f9f2345-5823-4288-ad4b-e49b1088cba4-audit-dir\") pod \"apiserver-9ddfb9f55-k5x25\" (UID: \"4f9f2345-5823-4288-ad4b-e49b1088cba4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-k5x25" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 
09:51:45.774202 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.774553 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/4f9f2345-5823-4288-ad4b-e49b1088cba4-image-import-ca\") pod \"apiserver-9ddfb9f55-k5x25\" (UID: \"4f9f2345-5823-4288-ad4b-e49b1088cba4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-k5x25" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.774645 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-pssml\" (UID: \"4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-pssml" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.774745 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1d8f71c1-e1fc-4770-ad03-7a1d4d244ce0-console-config\") pod \"console-64d44f6ddf-xvbzj\" (UID: \"1d8f71c1-e1fc-4770-ad03-7a1d4d244ce0\") " pod="openshift-console/console-64d44f6ddf-xvbzj" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.774853 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4f9f2345-5823-4288-ad4b-e49b1088cba4-etcd-client\") pod \"apiserver-9ddfb9f55-k5x25\" (UID: \"4f9f2345-5823-4288-ad4b-e49b1088cba4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-k5x25" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.774950 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkr9d\" (UniqueName: 
\"kubernetes.io/projected/9e92757e-cc25-48a6-a774-5c2a8a281576-kube-api-access-jkr9d\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.775045 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4-tmp\") pod \"controller-manager-65b6cccf98-pssml\" (UID: \"4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-pssml" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.775152 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4f9f2345-5823-4288-ad4b-e49b1088cba4-encryption-config\") pod \"apiserver-9ddfb9f55-k5x25\" (UID: \"4f9f2345-5823-4288-ad4b-e49b1088cba4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-k5x25" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.775243 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6497dc94-29dd-4d24-8a87-6721b752e8d3-serving-cert\") pod \"route-controller-manager-776cdc94d6-nhgrp\" (UID: \"6497dc94-29dd-4d24-8a87-6721b752e8d3\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-nhgrp" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.775333 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e92757e-cc25-48a6-a774-5c2a8a281576-trusted-ca\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.775430 5131 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/4f9f2345-5823-4288-ad4b-e49b1088cba4-image-import-ca\") pod \"apiserver-9ddfb9f55-k5x25\" (UID: \"4f9f2345-5823-4288-ad4b-e49b1088cba4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-k5x25" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.775438 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b56ad19c-62aa-42e5-bdce-bd890317e4da-trusted-ca\") pod \"console-operator-67c89758df-flx6z\" (UID: \"b56ad19c-62aa-42e5-bdce-bd890317e4da\") " pod="openshift-console-operator/console-operator-67c89758df-flx6z" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.775533 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-4nqqd"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.775737 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-sftp2" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.775864 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1d8f71c1-e1fc-4770-ad03-7a1d4d244ce0-console-config\") pod \"console-64d44f6ddf-xvbzj\" (UID: \"1d8f71c1-e1fc-4770-ad03-7a1d4d244ce0\") " pod="openshift-console/console-64d44f6ddf-xvbzj" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.775437 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5fptn\" (UniqueName: \"kubernetes.io/projected/3c82fced-e466-4e52-8d61-b62e172d3ea9-kube-api-access-5fptn\") pod \"machine-api-operator-755bb95488-cw4c4\" (UID: \"3c82fced-e466-4e52-8d61-b62e172d3ea9\") " pod="openshift-machine-api/machine-api-operator-755bb95488-cw4c4" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.776665 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/30454542-4fc6-4b3b-8917-13b0898bdc75-machine-approver-tls\") pod \"machine-approver-54c688565-vjhjw\" (UID: \"30454542-4fc6-4b3b-8917-13b0898bdc75\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-vjhjw" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.777042 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b56ad19c-62aa-42e5-bdce-bd890317e4da-serving-cert\") pod \"console-operator-67c89758df-flx6z\" (UID: \"b56ad19c-62aa-42e5-bdce-bd890317e4da\") " pod="openshift-console-operator/console-operator-67c89758df-flx6z" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.777439 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1d8f71c1-e1fc-4770-ad03-7a1d4d244ce0-service-ca\") pod 
\"console-64d44f6ddf-xvbzj\" (UID: \"1d8f71c1-e1fc-4770-ad03-7a1d4d244ce0\") " pod="openshift-console/console-64d44f6ddf-xvbzj" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.778452 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6497dc94-29dd-4d24-8a87-6721b752e8d3-serving-cert\") pod \"route-controller-manager-776cdc94d6-nhgrp\" (UID: \"6497dc94-29dd-4d24-8a87-6721b752e8d3\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-nhgrp" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.778764 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ef3c64b-1b50-43d7-888b-fa1d6dcf0282-serving-cert\") pod \"authentication-operator-7f5c659b84-g5kcd\" (UID: \"0ef3c64b-1b50-43d7-888b-fa1d6dcf0282\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-g5kcd" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.779093 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5116ec44-994d-4f27-872b-09ada0a94b73-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-vm2vl\" (UID: \"5116ec44-994d-4f27-872b-09ada0a94b73\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-vm2vl" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.779327 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4f9f2345-5823-4288-ad4b-e49b1088cba4-etcd-client\") pod \"apiserver-9ddfb9f55-k5x25\" (UID: \"4f9f2345-5823-4288-ad4b-e49b1088cba4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-k5x25" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.779602 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/4f9f2345-5823-4288-ad4b-e49b1088cba4-encryption-config\") pod \"apiserver-9ddfb9f55-k5x25\" (UID: \"4f9f2345-5823-4288-ad4b-e49b1088cba4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-k5x25" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.783401 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2r6qr"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.783604 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-4nqqd" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.787183 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-df6lk"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.787378 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2r6qr" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.790122 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-68cf44c8b8-z4875"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.790311 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-df6lk" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.792366 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.793111 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-k5x25"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.793137 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-pssml"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.793148 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-g5kcd"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.793161 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-b48tj"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.793357 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-z4875" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.795889 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-cw4c4"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.795913 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-4ktsk"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.795924 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-bc9f4"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.795936 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-mh9sm"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.795950 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-vdsqq"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.795960 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-xvbzj"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.795972 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-jz8vb"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.795984 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-rwmlc"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.795994 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-nhgrp"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.796006 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/marketplace-operator-547dbd544d-mrfk7"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.796018 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-h4w78"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.796393 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-b48tj" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.798948 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-h884s"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.799151 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-h4w78" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.801211 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-7cl88"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.801887 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-h884s" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.805559 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-grvm4"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.805695 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-7cl88" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.808516 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-wm88r"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.808540 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-q9xx7"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.808553 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-hxnn7"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.808566 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-2zfmr"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.808576 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-pxdmv"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.808586 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-cnl99"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.808596 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-vm2vl"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.808605 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-gqz76"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.808617 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-jk2mp"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.808627 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-console-operator/console-operator-67c89758df-flx6z"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.808636 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-4rj8b"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.808645 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-lbptl"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.808655 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-psjk8"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.808664 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-hhbw5"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.808674 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-l2qqh"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.808683 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-df6lk"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.808695 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-b48tj"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.808705 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-4nqqd"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.808714 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-h4w78"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.808724 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-vb59d"] Jan 07 09:51:45 crc 
kubenswrapper[5131]: I0107 09:51:45.808737 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-7dn5b"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.808747 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-sftp2"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.808757 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-fgq7r"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.808767 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2r6qr"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.808776 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29462985-nd9tw"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.808787 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-7cl88"] Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.808889 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-grvm4" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.812195 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.851868 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.871374 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.876987 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 07 09:51:45 crc kubenswrapper[5131]: E0107 09:51:45.877098 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:46.377076838 +0000 UTC m=+134.543378402 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.878302 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdvkl\" (UniqueName: \"kubernetes.io/projected/3444e2f9-d027-4e5d-b655-d564292fb959-kube-api-access-wdvkl\") pod \"olm-operator-5cdf44d969-vb59d\" (UID: \"3444e2f9-d027-4e5d-b655-d564292fb959\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-vb59d" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.878362 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1697c475-b030-40da-9ed0-7884931c55fd-tmp\") pod \"marketplace-operator-547dbd544d-mrfk7\" (UID: \"1697c475-b030-40da-9ed0-7884931c55fd\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-mrfk7" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.878439 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c39b15df-a1bc-4922-9712-8fba72c00fdf-config\") pod \"kube-apiserver-operator-575994946d-l2qqh\" (UID: \"c39b15df-a1bc-4922-9712-8fba72c00fdf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-l2qqh" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.878481 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/92c0b6a3-aea1-4854-9278-710a315edd4f-audit-policies\") pod \"apiserver-8596bd845d-jk2mp\" (UID: \"92c0b6a3-aea1-4854-9278-710a315edd4f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jk2mp" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.878525 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0bbf073e-c62d-4074-a057-00541ac18caa-webhook-certs\") pod \"multus-admission-controller-69db94689b-gqz76\" (UID: \"0bbf073e-c62d-4074-a057-00541ac18caa\") " pod="openshift-multus/multus-admission-controller-69db94689b-gqz76" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.878555 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0fe1be72-61f5-4433-908d-225206c4c7a1-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-4ktsk\" (UID: \"0fe1be72-61f5-4433-908d-225206c4c7a1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-4ktsk" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.878597 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/20b2fa4c-8df5-43ac-a56a-397cb97e918d-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-q9xx7\" (UID: \"20b2fa4c-8df5-43ac-a56a-397cb97e918d\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-q9xx7" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.878619 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/92c0b6a3-aea1-4854-9278-710a315edd4f-etcd-serving-ca\") pod \"apiserver-8596bd845d-jk2mp\" (UID: \"92c0b6a3-aea1-4854-9278-710a315edd4f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jk2mp" 
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.878644 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4-serving-cert\") pod \"controller-manager-65b6cccf98-pssml\" (UID: \"4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-pssml" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.878688 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djvht\" (UniqueName: \"kubernetes.io/projected/683287b8-61e8-4fb7-b688-586df63f560e-kube-api-access-djvht\") pod \"dns-operator-799b87ffcd-mh9sm\" (UID: \"683287b8-61e8-4fb7-b688-586df63f560e\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-mh9sm" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.878721 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4-config\") pod \"controller-manager-65b6cccf98-pssml\" (UID: \"4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-pssml" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.878760 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3444e2f9-d027-4e5d-b655-d564292fb959-profile-collector-cert\") pod \"olm-operator-5cdf44d969-vb59d\" (UID: \"3444e2f9-d027-4e5d-b655-d564292fb959\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-vb59d" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.878790 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4-proxy-ca-bundles\") pod 
\"controller-manager-65b6cccf98-pssml\" (UID: \"4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-pssml" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.878811 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/20b2fa4c-8df5-43ac-a56a-397cb97e918d-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-q9xx7\" (UID: \"20b2fa4c-8df5-43ac-a56a-397cb97e918d\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-q9xx7" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.878865 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jkr9d\" (UniqueName: \"kubernetes.io/projected/9e92757e-cc25-48a6-a774-5c2a8a281576-kube-api-access-jkr9d\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.878885 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4-tmp\") pod \"controller-manager-65b6cccf98-pssml\" (UID: \"4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-pssml" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.878952 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xh68\" (UniqueName: \"kubernetes.io/projected/cb015a21-e0c4-4c90-a563-ec8010ee6bd2-kube-api-access-8xh68\") pod \"kube-storage-version-migrator-operator-565b79b866-rwmlc\" (UID: \"cb015a21-e0c4-4c90-a563-ec8010ee6bd2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-rwmlc" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 
09:51:45.878981 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e92757e-cc25-48a6-a774-5c2a8a281576-trusted-ca\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.879026 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/20b2fa4c-8df5-43ac-a56a-397cb97e918d-images\") pod \"machine-config-operator-67c9d58cbb-q9xx7\" (UID: \"20b2fa4c-8df5-43ac-a56a-397cb97e918d\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-q9xx7" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.879051 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw6cr\" (UniqueName: \"kubernetes.io/projected/20b2fa4c-8df5-43ac-a56a-397cb97e918d-kube-api-access-cw6cr\") pod \"machine-config-operator-67c9d58cbb-q9xx7\" (UID: \"20b2fa4c-8df5-43ac-a56a-397cb97e918d\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-q9xx7" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.879096 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/45f1adbe-9004-4ad3-b4e2-f8a0c6936502-webhook-cert\") pod \"packageserver-7d4fc7d867-psjk8\" (UID: \"45f1adbe-9004-4ad3-b4e2-f8a0c6936502\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-psjk8" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.879116 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgd4t\" (UniqueName: \"kubernetes.io/projected/45f1adbe-9004-4ad3-b4e2-f8a0c6936502-kube-api-access-wgd4t\") pod 
\"packageserver-7d4fc7d867-psjk8\" (UID: \"45f1adbe-9004-4ad3-b4e2-f8a0c6936502\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-psjk8" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.879136 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1697c475-b030-40da-9ed0-7884931c55fd-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-mrfk7\" (UID: \"1697c475-b030-40da-9ed0-7884931c55fd\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-mrfk7" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.879183 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/45f1adbe-9004-4ad3-b4e2-f8a0c6936502-apiservice-cert\") pod \"packageserver-7d4fc7d867-psjk8\" (UID: \"45f1adbe-9004-4ad3-b4e2-f8a0c6936502\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-psjk8" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.879201 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/92c0b6a3-aea1-4854-9278-710a315edd4f-etcd-client\") pod \"apiserver-8596bd845d-jk2mp\" (UID: \"92c0b6a3-aea1-4854-9278-710a315edd4f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jk2mp" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.879223 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3444e2f9-d027-4e5d-b655-d564292fb959-srv-cert\") pod \"olm-operator-5cdf44d969-vb59d\" (UID: \"3444e2f9-d027-4e5d-b655-d564292fb959\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-vb59d" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.879263 5131 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/683287b8-61e8-4fb7-b688-586df63f560e-metrics-tls\") pod \"dns-operator-799b87ffcd-mh9sm\" (UID: \"683287b8-61e8-4fb7-b688-586df63f560e\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-mh9sm" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.879301 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e92757e-cc25-48a6-a774-5c2a8a281576-registry-tls\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.879342 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb015a21-e0c4-4c90-a563-ec8010ee6bd2-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-rwmlc\" (UID: \"cb015a21-e0c4-4c90-a563-ec8010ee6bd2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-rwmlc" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.879368 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb015a21-e0c4-4c90-a563-ec8010ee6bd2-config\") pod \"kube-storage-version-migrator-operator-565b79b866-rwmlc\" (UID: \"cb015a21-e0c4-4c90-a563-ec8010ee6bd2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-rwmlc" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.879390 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4-tmp\") pod \"controller-manager-65b6cccf98-pssml\" (UID: 
\"4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-pssml" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.879412 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btwdj\" (UniqueName: \"kubernetes.io/projected/92c0b6a3-aea1-4854-9278-710a315edd4f-kube-api-access-btwdj\") pod \"apiserver-8596bd845d-jk2mp\" (UID: \"92c0b6a3-aea1-4854-9278-710a315edd4f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jk2mp" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.879433 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ea011b17-d07a-47da-9c01-d2a384306bcd-tmpfs\") pod \"catalog-operator-75ff9f647d-7dn5b\" (UID: \"ea011b17-d07a-47da-9c01-d2a384306bcd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-7dn5b" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.879453 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/683287b8-61e8-4fb7-b688-586df63f560e-tmp-dir\") pod \"dns-operator-799b87ffcd-mh9sm\" (UID: \"683287b8-61e8-4fb7-b688-586df63f560e\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-mh9sm" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.879513 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e92757e-cc25-48a6-a774-5c2a8a281576-bound-sa-token\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.879597 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/c39b15df-a1bc-4922-9712-8fba72c00fdf-tmp-dir\") pod \"kube-apiserver-operator-575994946d-l2qqh\" (UID: \"c39b15df-a1bc-4922-9712-8fba72c00fdf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-l2qqh" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.879626 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg5jj\" (UniqueName: \"kubernetes.io/projected/3aec8df7-9c9b-4f00-9a8d-ab05bbecb4d4-kube-api-access-xg5jj\") pod \"migrator-866fcbc849-wm88r\" (UID: \"3aec8df7-9c9b-4f00-9a8d-ab05bbecb4d4\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-wm88r" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.879668 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c39b15df-a1bc-4922-9712-8fba72c00fdf-kube-api-access\") pod \"kube-apiserver-operator-575994946d-l2qqh\" (UID: \"c39b15df-a1bc-4922-9712-8fba72c00fdf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-l2qqh" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.879691 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/92c0b6a3-aea1-4854-9278-710a315edd4f-audit-dir\") pod \"apiserver-8596bd845d-jk2mp\" (UID: \"92c0b6a3-aea1-4854-9278-710a315edd4f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jk2mp" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.879709 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ea011b17-d07a-47da-9c01-d2a384306bcd-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-7dn5b\" (UID: \"ea011b17-d07a-47da-9c01-d2a384306bcd\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-7dn5b" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.879793 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e92757e-cc25-48a6-a774-5c2a8a281576-registry-certificates\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.879855 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92c0b6a3-aea1-4854-9278-710a315edd4f-trusted-ca-bundle\") pod \"apiserver-8596bd845d-jk2mp\" (UID: \"92c0b6a3-aea1-4854-9278-710a315edd4f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jk2mp" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.879896 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e92757e-cc25-48a6-a774-5c2a8a281576-ca-trust-extracted\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.879941 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4-client-ca\") pod \"controller-manager-65b6cccf98-pssml\" (UID: \"4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-pssml" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.879990 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/92c0b6a3-aea1-4854-9278-710a315edd4f-serving-cert\") pod \"apiserver-8596bd845d-jk2mp\" (UID: \"92c0b6a3-aea1-4854-9278-710a315edd4f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jk2mp" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.880289 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-pssml\" (UID: \"4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-pssml" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.880437 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c39b15df-a1bc-4922-9712-8fba72c00fdf-serving-cert\") pod \"kube-apiserver-operator-575994946d-l2qqh\" (UID: \"c39b15df-a1bc-4922-9712-8fba72c00fdf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-l2qqh" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.880507 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/92c0b6a3-aea1-4854-9278-710a315edd4f-encryption-config\") pod \"apiserver-8596bd845d-jk2mp\" (UID: \"92c0b6a3-aea1-4854-9278-710a315edd4f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jk2mp" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.880547 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1697c475-b030-40da-9ed0-7884931c55fd-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-mrfk7\" (UID: \"1697c475-b030-40da-9ed0-7884931c55fd\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-mrfk7" Jan 07 09:51:45 crc 
kubenswrapper[5131]: I0107 09:51:45.880453 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e92757e-cc25-48a6-a774-5c2a8a281576-trusted-ca\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.880671 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:45 crc kubenswrapper[5131]: E0107 09:51:45.881304 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:46.381282346 +0000 UTC m=+134.547583940 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.881415 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ea011b17-d07a-47da-9c01-d2a384306bcd-srv-cert\") pod \"catalog-operator-75ff9f647d-7dn5b\" (UID: \"ea011b17-d07a-47da-9c01-d2a384306bcd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-7dn5b" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.881427 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4-client-ca\") pod \"controller-manager-65b6cccf98-pssml\" (UID: \"4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-pssml" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.881462 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e92757e-cc25-48a6-a774-5c2a8a281576-ca-trust-extracted\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.881453 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e92757e-cc25-48a6-a774-5c2a8a281576-installation-pull-secrets\") pod 
\"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.881530 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sv8dz\" (UniqueName: \"kubernetes.io/projected/4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4-kube-api-access-sv8dz\") pod \"controller-manager-65b6cccf98-pssml\" (UID: \"4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-pssml" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.881568 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3444e2f9-d027-4e5d-b655-d564292fb959-tmpfs\") pod \"olm-operator-5cdf44d969-vb59d\" (UID: \"3444e2f9-d027-4e5d-b655-d564292fb959\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-vb59d" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.881691 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bhb7r\" (UniqueName: \"kubernetes.io/projected/0fe1be72-61f5-4433-908d-225206c4c7a1-kube-api-access-bhb7r\") pod \"cluster-samples-operator-6b564684c8-4ktsk\" (UID: \"0fe1be72-61f5-4433-908d-225206c4c7a1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-4ktsk" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.881731 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6x2b\" (UniqueName: \"kubernetes.io/projected/0bbf073e-c62d-4074-a057-00541ac18caa-kube-api-access-b6x2b\") pod \"multus-admission-controller-69db94689b-gqz76\" (UID: \"0bbf073e-c62d-4074-a057-00541ac18caa\") " pod="openshift-multus/multus-admission-controller-69db94689b-gqz76" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.881899 5131 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/45f1adbe-9004-4ad3-b4e2-f8a0c6936502-tmpfs\") pod \"packageserver-7d4fc7d867-psjk8\" (UID: \"45f1adbe-9004-4ad3-b4e2-f8a0c6936502\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-psjk8" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.881927 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7d27\" (UniqueName: \"kubernetes.io/projected/ea011b17-d07a-47da-9c01-d2a384306bcd-kube-api-access-p7d27\") pod \"catalog-operator-75ff9f647d-7dn5b\" (UID: \"ea011b17-d07a-47da-9c01-d2a384306bcd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-7dn5b" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.881980 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2qkb\" (UniqueName: \"kubernetes.io/projected/1697c475-b030-40da-9ed0-7884931c55fd-kube-api-access-h2qkb\") pod \"marketplace-operator-547dbd544d-mrfk7\" (UID: \"1697c475-b030-40da-9ed0-7884931c55fd\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-mrfk7" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.882451 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e92757e-cc25-48a6-a774-5c2a8a281576-registry-certificates\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.882762 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4-config\") pod \"controller-manager-65b6cccf98-pssml\" (UID: 
\"4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-pssml" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.882968 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4-serving-cert\") pod \"controller-manager-65b6cccf98-pssml\" (UID: \"4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-pssml" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.885438 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e92757e-cc25-48a6-a774-5c2a8a281576-installation-pull-secrets\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.887040 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0fe1be72-61f5-4433-908d-225206c4c7a1-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-4ktsk\" (UID: \"0fe1be72-61f5-4433-908d-225206c4c7a1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-4ktsk" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.888771 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e92757e-cc25-48a6-a774-5c2a8a281576-registry-tls\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.891986 5131 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.911742 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.932366 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.951592 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.971825 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.982459 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 07 09:51:45 crc kubenswrapper[5131]: E0107 09:51:45.982620 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:46.482598274 +0000 UTC m=+134.648899838 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.982710 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c39b15df-a1bc-4922-9712-8fba72c00fdf-tmp-dir\") pod \"kube-apiserver-operator-575994946d-l2qqh\" (UID: \"c39b15df-a1bc-4922-9712-8fba72c00fdf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-l2qqh" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.982741 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xg5jj\" (UniqueName: \"kubernetes.io/projected/3aec8df7-9c9b-4f00-9a8d-ab05bbecb4d4-kube-api-access-xg5jj\") pod \"migrator-866fcbc849-wm88r\" (UID: \"3aec8df7-9c9b-4f00-9a8d-ab05bbecb4d4\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-wm88r" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.982767 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c39b15df-a1bc-4922-9712-8fba72c00fdf-kube-api-access\") pod \"kube-apiserver-operator-575994946d-l2qqh\" (UID: \"c39b15df-a1bc-4922-9712-8fba72c00fdf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-l2qqh" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.982791 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/92c0b6a3-aea1-4854-9278-710a315edd4f-audit-dir\") pod 
\"apiserver-8596bd845d-jk2mp\" (UID: \"92c0b6a3-aea1-4854-9278-710a315edd4f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jk2mp" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.982815 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ea011b17-d07a-47da-9c01-d2a384306bcd-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-7dn5b\" (UID: \"ea011b17-d07a-47da-9c01-d2a384306bcd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-7dn5b" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.982864 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92c0b6a3-aea1-4854-9278-710a315edd4f-trusted-ca-bundle\") pod \"apiserver-8596bd845d-jk2mp\" (UID: \"92c0b6a3-aea1-4854-9278-710a315edd4f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jk2mp" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.982892 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/92c0b6a3-aea1-4854-9278-710a315edd4f-serving-cert\") pod \"apiserver-8596bd845d-jk2mp\" (UID: \"92c0b6a3-aea1-4854-9278-710a315edd4f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jk2mp" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.982916 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c39b15df-a1bc-4922-9712-8fba72c00fdf-serving-cert\") pod \"kube-apiserver-operator-575994946d-l2qqh\" (UID: \"c39b15df-a1bc-4922-9712-8fba72c00fdf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-l2qqh" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.982937 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/92c0b6a3-aea1-4854-9278-710a315edd4f-encryption-config\") pod \"apiserver-8596bd845d-jk2mp\" (UID: \"92c0b6a3-aea1-4854-9278-710a315edd4f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jk2mp" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.982959 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1697c475-b030-40da-9ed0-7884931c55fd-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-mrfk7\" (UID: \"1697c475-b030-40da-9ed0-7884931c55fd\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-mrfk7" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.982992 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.983020 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ea011b17-d07a-47da-9c01-d2a384306bcd-srv-cert\") pod \"catalog-operator-75ff9f647d-7dn5b\" (UID: \"ea011b17-d07a-47da-9c01-d2a384306bcd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-7dn5b" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.983046 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3444e2f9-d027-4e5d-b655-d564292fb959-tmpfs\") pod \"olm-operator-5cdf44d969-vb59d\" (UID: \"3444e2f9-d027-4e5d-b655-d564292fb959\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-vb59d" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.983079 5131 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b6x2b\" (UniqueName: \"kubernetes.io/projected/0bbf073e-c62d-4074-a057-00541ac18caa-kube-api-access-b6x2b\") pod \"multus-admission-controller-69db94689b-gqz76\" (UID: \"0bbf073e-c62d-4074-a057-00541ac18caa\") " pod="openshift-multus/multus-admission-controller-69db94689b-gqz76" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.983113 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/45f1adbe-9004-4ad3-b4e2-f8a0c6936502-tmpfs\") pod \"packageserver-7d4fc7d867-psjk8\" (UID: \"45f1adbe-9004-4ad3-b4e2-f8a0c6936502\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-psjk8" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.983137 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p7d27\" (UniqueName: \"kubernetes.io/projected/ea011b17-d07a-47da-9c01-d2a384306bcd-kube-api-access-p7d27\") pod \"catalog-operator-75ff9f647d-7dn5b\" (UID: \"ea011b17-d07a-47da-9c01-d2a384306bcd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-7dn5b" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.983161 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h2qkb\" (UniqueName: \"kubernetes.io/projected/1697c475-b030-40da-9ed0-7884931c55fd-kube-api-access-h2qkb\") pod \"marketplace-operator-547dbd544d-mrfk7\" (UID: \"1697c475-b030-40da-9ed0-7884931c55fd\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-mrfk7" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.983187 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wdvkl\" (UniqueName: \"kubernetes.io/projected/3444e2f9-d027-4e5d-b655-d564292fb959-kube-api-access-wdvkl\") pod \"olm-operator-5cdf44d969-vb59d\" (UID: 
\"3444e2f9-d027-4e5d-b655-d564292fb959\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-vb59d" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.983211 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1697c475-b030-40da-9ed0-7884931c55fd-tmp\") pod \"marketplace-operator-547dbd544d-mrfk7\" (UID: \"1697c475-b030-40da-9ed0-7884931c55fd\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-mrfk7" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.983241 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c39b15df-a1bc-4922-9712-8fba72c00fdf-config\") pod \"kube-apiserver-operator-575994946d-l2qqh\" (UID: \"c39b15df-a1bc-4922-9712-8fba72c00fdf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-l2qqh" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.983266 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/92c0b6a3-aea1-4854-9278-710a315edd4f-audit-policies\") pod \"apiserver-8596bd845d-jk2mp\" (UID: \"92c0b6a3-aea1-4854-9278-710a315edd4f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jk2mp" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.983289 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0bbf073e-c62d-4074-a057-00541ac18caa-webhook-certs\") pod \"multus-admission-controller-69db94689b-gqz76\" (UID: \"0bbf073e-c62d-4074-a057-00541ac18caa\") " pod="openshift-multus/multus-admission-controller-69db94689b-gqz76" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.983319 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/20b2fa4c-8df5-43ac-a56a-397cb97e918d-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-q9xx7\" (UID: \"20b2fa4c-8df5-43ac-a56a-397cb97e918d\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-q9xx7" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.983341 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c39b15df-a1bc-4922-9712-8fba72c00fdf-tmp-dir\") pod \"kube-apiserver-operator-575994946d-l2qqh\" (UID: \"c39b15df-a1bc-4922-9712-8fba72c00fdf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-l2qqh" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.983341 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/92c0b6a3-aea1-4854-9278-710a315edd4f-etcd-serving-ca\") pod \"apiserver-8596bd845d-jk2mp\" (UID: \"92c0b6a3-aea1-4854-9278-710a315edd4f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jk2mp" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.983391 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-djvht\" (UniqueName: \"kubernetes.io/projected/683287b8-61e8-4fb7-b688-586df63f560e-kube-api-access-djvht\") pod \"dns-operator-799b87ffcd-mh9sm\" (UID: \"683287b8-61e8-4fb7-b688-586df63f560e\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-mh9sm" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.983443 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3444e2f9-d027-4e5d-b655-d564292fb959-profile-collector-cert\") pod \"olm-operator-5cdf44d969-vb59d\" (UID: \"3444e2f9-d027-4e5d-b655-d564292fb959\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-vb59d" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.983498 5131 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/20b2fa4c-8df5-43ac-a56a-397cb97e918d-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-q9xx7\" (UID: \"20b2fa4c-8df5-43ac-a56a-397cb97e918d\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-q9xx7" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.983540 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8xh68\" (UniqueName: \"kubernetes.io/projected/cb015a21-e0c4-4c90-a563-ec8010ee6bd2-kube-api-access-8xh68\") pod \"kube-storage-version-migrator-operator-565b79b866-rwmlc\" (UID: \"cb015a21-e0c4-4c90-a563-ec8010ee6bd2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-rwmlc" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.983572 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/20b2fa4c-8df5-43ac-a56a-397cb97e918d-images\") pod \"machine-config-operator-67c9d58cbb-q9xx7\" (UID: \"20b2fa4c-8df5-43ac-a56a-397cb97e918d\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-q9xx7" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.983592 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cw6cr\" (UniqueName: \"kubernetes.io/projected/20b2fa4c-8df5-43ac-a56a-397cb97e918d-kube-api-access-cw6cr\") pod \"machine-config-operator-67c9d58cbb-q9xx7\" (UID: \"20b2fa4c-8df5-43ac-a56a-397cb97e918d\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-q9xx7" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.983611 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/45f1adbe-9004-4ad3-b4e2-f8a0c6936502-webhook-cert\") pod 
\"packageserver-7d4fc7d867-psjk8\" (UID: \"45f1adbe-9004-4ad3-b4e2-f8a0c6936502\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-psjk8" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.983629 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wgd4t\" (UniqueName: \"kubernetes.io/projected/45f1adbe-9004-4ad3-b4e2-f8a0c6936502-kube-api-access-wgd4t\") pod \"packageserver-7d4fc7d867-psjk8\" (UID: \"45f1adbe-9004-4ad3-b4e2-f8a0c6936502\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-psjk8" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.983646 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1697c475-b030-40da-9ed0-7884931c55fd-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-mrfk7\" (UID: \"1697c475-b030-40da-9ed0-7884931c55fd\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-mrfk7" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.983671 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/45f1adbe-9004-4ad3-b4e2-f8a0c6936502-apiservice-cert\") pod \"packageserver-7d4fc7d867-psjk8\" (UID: \"45f1adbe-9004-4ad3-b4e2-f8a0c6936502\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-psjk8" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.983688 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/92c0b6a3-aea1-4854-9278-710a315edd4f-etcd-client\") pod \"apiserver-8596bd845d-jk2mp\" (UID: \"92c0b6a3-aea1-4854-9278-710a315edd4f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jk2mp" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.983708 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" 
(UniqueName: \"kubernetes.io/secret/3444e2f9-d027-4e5d-b655-d564292fb959-srv-cert\") pod \"olm-operator-5cdf44d969-vb59d\" (UID: \"3444e2f9-d027-4e5d-b655-d564292fb959\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-vb59d" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.983729 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/683287b8-61e8-4fb7-b688-586df63f560e-metrics-tls\") pod \"dns-operator-799b87ffcd-mh9sm\" (UID: \"683287b8-61e8-4fb7-b688-586df63f560e\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-mh9sm" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.983753 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb015a21-e0c4-4c90-a563-ec8010ee6bd2-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-rwmlc\" (UID: \"cb015a21-e0c4-4c90-a563-ec8010ee6bd2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-rwmlc" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.983774 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb015a21-e0c4-4c90-a563-ec8010ee6bd2-config\") pod \"kube-storage-version-migrator-operator-565b79b866-rwmlc\" (UID: \"cb015a21-e0c4-4c90-a563-ec8010ee6bd2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-rwmlc" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.983792 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-btwdj\" (UniqueName: \"kubernetes.io/projected/92c0b6a3-aea1-4854-9278-710a315edd4f-kube-api-access-btwdj\") pod \"apiserver-8596bd845d-jk2mp\" (UID: \"92c0b6a3-aea1-4854-9278-710a315edd4f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jk2mp" 
Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.983810 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ea011b17-d07a-47da-9c01-d2a384306bcd-tmpfs\") pod \"catalog-operator-75ff9f647d-7dn5b\" (UID: \"ea011b17-d07a-47da-9c01-d2a384306bcd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-7dn5b" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.983828 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/683287b8-61e8-4fb7-b688-586df63f560e-tmp-dir\") pod \"dns-operator-799b87ffcd-mh9sm\" (UID: \"683287b8-61e8-4fb7-b688-586df63f560e\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-mh9sm" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.984206 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/683287b8-61e8-4fb7-b688-586df63f560e-tmp-dir\") pod \"dns-operator-799b87ffcd-mh9sm\" (UID: \"683287b8-61e8-4fb7-b688-586df63f560e\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-mh9sm" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.984680 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3444e2f9-d027-4e5d-b655-d564292fb959-tmpfs\") pod \"olm-operator-5cdf44d969-vb59d\" (UID: \"3444e2f9-d027-4e5d-b655-d564292fb959\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-vb59d" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.985193 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/92c0b6a3-aea1-4854-9278-710a315edd4f-audit-dir\") pod \"apiserver-8596bd845d-jk2mp\" (UID: \"92c0b6a3-aea1-4854-9278-710a315edd4f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jk2mp" Jan 07 09:51:45 crc kubenswrapper[5131]: 
I0107 09:51:45.985243 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/45f1adbe-9004-4ad3-b4e2-f8a0c6936502-tmpfs\") pod \"packageserver-7d4fc7d867-psjk8\" (UID: \"45f1adbe-9004-4ad3-b4e2-f8a0c6936502\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-psjk8" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.985249 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1697c475-b030-40da-9ed0-7884931c55fd-tmp\") pod \"marketplace-operator-547dbd544d-mrfk7\" (UID: \"1697c475-b030-40da-9ed0-7884931c55fd\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-mrfk7" Jan 07 09:51:45 crc kubenswrapper[5131]: E0107 09:51:45.985399 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:46.485383288 +0000 UTC m=+134.651684942 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.985599 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/20b2fa4c-8df5-43ac-a56a-397cb97e918d-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-q9xx7\" (UID: \"20b2fa4c-8df5-43ac-a56a-397cb97e918d\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-q9xx7" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.985960 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ea011b17-d07a-47da-9c01-d2a384306bcd-tmpfs\") pod \"catalog-operator-75ff9f647d-7dn5b\" (UID: \"ea011b17-d07a-47da-9c01-d2a384306bcd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-7dn5b" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.986333 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb015a21-e0c4-4c90-a563-ec8010ee6bd2-config\") pod \"kube-storage-version-migrator-operator-565b79b866-rwmlc\" (UID: \"cb015a21-e0c4-4c90-a563-ec8010ee6bd2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-rwmlc" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.990875 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb015a21-e0c4-4c90-a563-ec8010ee6bd2-serving-cert\") pod 
\"kube-storage-version-migrator-operator-565b79b866-rwmlc\" (UID: \"cb015a21-e0c4-4c90-a563-ec8010ee6bd2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-rwmlc" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.991524 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/683287b8-61e8-4fb7-b688-586df63f560e-metrics-tls\") pod \"dns-operator-799b87ffcd-mh9sm\" (UID: \"683287b8-61e8-4fb7-b688-586df63f560e\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-mh9sm" Jan 07 09:51:45 crc kubenswrapper[5131]: I0107 09:51:45.991635 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.012092 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.032786 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.052352 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.056781 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c39b15df-a1bc-4922-9712-8fba72c00fdf-config\") pod \"kube-apiserver-operator-575994946d-l2qqh\" (UID: \"c39b15df-a1bc-4922-9712-8fba72c00fdf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-l2qqh" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 
09:51:46.072886 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.077172 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c39b15df-a1bc-4922-9712-8fba72c00fdf-serving-cert\") pod \"kube-apiserver-operator-575994946d-l2qqh\" (UID: \"c39b15df-a1bc-4922-9712-8fba72c00fdf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-l2qqh" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.085346 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 07 09:51:46 crc kubenswrapper[5131]: E0107 09:51:46.086031 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:46.585988254 +0000 UTC m=+134.752289868 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.092307 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.112791 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.133219 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.139536 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0bbf073e-c62d-4074-a057-00541ac18caa-webhook-certs\") pod \"multus-admission-controller-69db94689b-gqz76\" (UID: \"0bbf073e-c62d-4074-a057-00541ac18caa\") " pod="openshift-multus/multus-admission-controller-69db94689b-gqz76" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.152583 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.159618 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/92c0b6a3-aea1-4854-9278-710a315edd4f-etcd-client\") pod \"apiserver-8596bd845d-jk2mp\" (UID: \"92c0b6a3-aea1-4854-9278-710a315edd4f\") " 
pod="openshift-oauth-apiserver/apiserver-8596bd845d-jk2mp" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.173036 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.174561 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92c0b6a3-aea1-4854-9278-710a315edd4f-trusted-ca-bundle\") pod \"apiserver-8596bd845d-jk2mp\" (UID: \"92c0b6a3-aea1-4854-9278-710a315edd4f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jk2mp" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.187954 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:46 crc kubenswrapper[5131]: E0107 09:51:46.188488 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:46.688455773 +0000 UTC m=+134.854757377 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.192696 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.200111 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/92c0b6a3-aea1-4854-9278-710a315edd4f-serving-cert\") pod \"apiserver-8596bd845d-jk2mp\" (UID: \"92c0b6a3-aea1-4854-9278-710a315edd4f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jk2mp" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.213252 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.232239 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.242697 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/92c0b6a3-aea1-4854-9278-710a315edd4f-encryption-config\") pod \"apiserver-8596bd845d-jk2mp\" (UID: \"92c0b6a3-aea1-4854-9278-710a315edd4f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jk2mp" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.253186 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.256461 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/92c0b6a3-aea1-4854-9278-710a315edd4f-audit-policies\") pod \"apiserver-8596bd845d-jk2mp\" (UID: \"92c0b6a3-aea1-4854-9278-710a315edd4f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jk2mp" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.273183 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.289647 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 07 09:51:46 crc kubenswrapper[5131]: E0107 09:51:46.290214 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:46.790190499 +0000 UTC m=+134.956492093 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.292280 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.313299 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.315166 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/92c0b6a3-aea1-4854-9278-710a315edd4f-etcd-serving-ca\") pod \"apiserver-8596bd845d-jk2mp\" (UID: \"92c0b6a3-aea1-4854-9278-710a315edd4f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jk2mp" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.333199 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.339517 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ea011b17-d07a-47da-9c01-d2a384306bcd-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-7dn5b\" (UID: \"ea011b17-d07a-47da-9c01-d2a384306bcd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-7dn5b" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.340253 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3444e2f9-d027-4e5d-b655-d564292fb959-profile-collector-cert\") pod \"olm-operator-5cdf44d969-vb59d\" (UID: \"3444e2f9-d027-4e5d-b655-d564292fb959\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-vb59d" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.353575 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.373626 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.380383 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ea011b17-d07a-47da-9c01-d2a384306bcd-srv-cert\") pod \"catalog-operator-75ff9f647d-7dn5b\" (UID: \"ea011b17-d07a-47da-9c01-d2a384306bcd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-7dn5b" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.392239 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:46 crc kubenswrapper[5131]: E0107 09:51:46.392945 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:46.89291249 +0000 UTC m=+135.059214104 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.393998 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.412817 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.436031 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.452550 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.472683 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.493069 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.493240 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 07 09:51:46 crc kubenswrapper[5131]: E0107 09:51:46.493483 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:46.993451173 +0000 UTC m=+135.159752787 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.494309 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:46 crc kubenswrapper[5131]: E0107 09:51:46.494670 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:46.994652407 +0000 UTC m=+135.160954001 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.513003 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.515928 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/20b2fa4c-8df5-43ac-a56a-397cb97e918d-images\") pod \"machine-config-operator-67c9d58cbb-q9xx7\" (UID: \"20b2fa4c-8df5-43ac-a56a-397cb97e918d\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-q9xx7" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.532714 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.552893 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.559702 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/20b2fa4c-8df5-43ac-a56a-397cb97e918d-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-q9xx7\" (UID: \"20b2fa4c-8df5-43ac-a56a-397cb97e918d\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-q9xx7" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.572716 5131 
reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.580539 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3444e2f9-d027-4e5d-b655-d564292fb959-srv-cert\") pod \"olm-operator-5cdf44d969-vb59d\" (UID: \"3444e2f9-d027-4e5d-b655-d564292fb959\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-vb59d" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.593082 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.595051 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 07 09:51:46 crc kubenswrapper[5131]: E0107 09:51:46.595264 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:47.095232611 +0000 UTC m=+135.261534215 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.595661 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:46 crc kubenswrapper[5131]: E0107 09:51:46.596184 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:47.096150152 +0000 UTC m=+135.262451756 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.599639 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1697c475-b030-40da-9ed0-7884931c55fd-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-mrfk7\" (UID: \"1697c475-b030-40da-9ed0-7884931c55fd\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-mrfk7" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.613220 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.633404 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.663586 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.667448 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1697c475-b030-40da-9ed0-7884931c55fd-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-mrfk7\" (UID: \"1697c475-b030-40da-9ed0-7884931c55fd\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-mrfk7" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.672337 5131 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.692355 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.696385 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 07 09:51:46 crc kubenswrapper[5131]: E0107 09:51:46.696619 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:47.19657949 +0000 UTC m=+135.362881084 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.697326 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:46 crc kubenswrapper[5131]: E0107 09:51:46.697980 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:47.197961752 +0000 UTC m=+135.364263356 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.700048 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/45f1adbe-9004-4ad3-b4e2-f8a0c6936502-apiservice-cert\") pod \"packageserver-7d4fc7d867-psjk8\" (UID: \"45f1adbe-9004-4ad3-b4e2-f8a0c6936502\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-psjk8" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.702260 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/45f1adbe-9004-4ad3-b4e2-f8a0c6936502-webhook-cert\") pod \"packageserver-7d4fc7d867-psjk8\" (UID: \"45f1adbe-9004-4ad3-b4e2-f8a0c6936502\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-psjk8" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.713169 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.732376 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.752735 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 
09:51:46.770556 5131 request.go:752] "Waited before sending request" delay="1.009933682s" reason="client-side throttling, not priority and fairness" verb="POST" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/serviceaccounts/console-operator/token" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.793905 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-898c6\" (UniqueName: \"kubernetes.io/projected/b56ad19c-62aa-42e5-bdce-bd890317e4da-kube-api-access-898c6\") pod \"console-operator-67c89758df-flx6z\" (UID: \"b56ad19c-62aa-42e5-bdce-bd890317e4da\") " pod="openshift-console-operator/console-operator-67c89758df-flx6z" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.798143 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 07 09:51:46 crc kubenswrapper[5131]: E0107 09:51:46.798756 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:47.298734185 +0000 UTC m=+135.465035789 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.813822 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.820027 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqzzf\" (UniqueName: \"kubernetes.io/projected/0ef3c64b-1b50-43d7-888b-fa1d6dcf0282-kube-api-access-nqzzf\") pod \"authentication-operator-7f5c659b84-g5kcd\" (UID: \"0ef3c64b-1b50-43d7-888b-fa1d6dcf0282\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-g5kcd" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.830879 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-g5kcd" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.833366 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.853125 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.871464 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-flx6z" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.872986 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.896185 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.899531 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:46 crc kubenswrapper[5131]: E0107 09:51:46.899973 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:47.399956679 +0000 UTC m=+135.566258253 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.944206 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvsh5\" (UniqueName: \"kubernetes.io/projected/5116ec44-994d-4f27-872b-09ada0a94b73-kube-api-access-zvsh5\") pod \"openshift-apiserver-operator-846cbfc458-vm2vl\" (UID: \"5116ec44-994d-4f27-872b-09ada0a94b73\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-vm2vl" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.965587 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gj97\" (UniqueName: \"kubernetes.io/projected/6497dc94-29dd-4d24-8a87-6721b752e8d3-kube-api-access-5gj97\") pod \"route-controller-manager-776cdc94d6-nhgrp\" (UID: \"6497dc94-29dd-4d24-8a87-6721b752e8d3\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-nhgrp" Jan 07 09:51:46 crc kubenswrapper[5131]: I0107 09:51:46.976052 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfb67\" (UniqueName: \"kubernetes.io/projected/4f9f2345-5823-4288-ad4b-e49b1088cba4-kube-api-access-qfb67\") pod \"apiserver-9ddfb9f55-k5x25\" (UID: \"4f9f2345-5823-4288-ad4b-e49b1088cba4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-k5x25" Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.000683 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 07 09:51:47 crc kubenswrapper[5131]: E0107 09:51:47.001427 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:47.501411773 +0000 UTC m=+135.667713337 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.016574 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-df9wz\" (UniqueName: \"kubernetes.io/projected/1d8f71c1-e1fc-4770-ad03-7a1d4d244ce0-kube-api-access-df9wz\") pod \"console-64d44f6ddf-xvbzj\" (UID: \"1d8f71c1-e1fc-4770-ad03-7a1d4d244ce0\") " pod="openshift-console/console-64d44f6ddf-xvbzj" Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.027467 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5px8d\" (UniqueName: \"kubernetes.io/projected/30454542-4fc6-4b3b-8917-13b0898bdc75-kube-api-access-5px8d\") pod \"machine-approver-54c688565-vjhjw\" (UID: \"30454542-4fc6-4b3b-8917-13b0898bdc75\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-vjhjw" Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.052553 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-5fptn\" (UniqueName: \"kubernetes.io/projected/3c82fced-e466-4e52-8d61-b62e172d3ea9-kube-api-access-5fptn\") pod \"machine-api-operator-755bb95488-cw4c4\" (UID: \"3c82fced-e466-4e52-8d61-b62e172d3ea9\") " pod="openshift-machine-api/machine-api-operator-755bb95488-cw4c4" Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.052791 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.070278 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-g5kcd"] Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.073499 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Jan 07 09:51:47 crc kubenswrapper[5131]: W0107 09:51:47.081353 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ef3c64b_1b50_43d7_888b_fa1d6dcf0282.slice/crio-52be4cea2afcc627b1049ea74fa03029531abc95015b54e080e6460bc5213273 WatchSource:0}: Error finding container 52be4cea2afcc627b1049ea74fa03029531abc95015b54e080e6460bc5213273: Status 404 returned error can't find the container with id 52be4cea2afcc627b1049ea74fa03029531abc95015b54e080e6460bc5213273 Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.087408 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-k5x25" Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.092907 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.093276 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-flx6z"] Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.098242 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-cw4c4" Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.102784 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:47 crc kubenswrapper[5131]: E0107 09:51:47.103233 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:47.603209982 +0000 UTC m=+135.769511566 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:47 crc kubenswrapper[5131]: W0107 09:51:47.103587 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb56ad19c_62aa_42e5_bdce_bd890317e4da.slice/crio-da39e7c779cb397402a65b24032a2669a1e49fdf19550455668cfee625708b7c WatchSource:0}: Error finding container da39e7c779cb397402a65b24032a2669a1e49fdf19550455668cfee625708b7c: Status 404 returned error can't find the container with id da39e7c779cb397402a65b24032a2669a1e49fdf19550455668cfee625708b7c Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.112989 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.132285 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.149707 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-nhgrp" Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.155046 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-vm2vl" Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.165149 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-64d44f6ddf-xvbzj" Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.173501 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.176892 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.185150 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-vjhjw" Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.192509 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.203482 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 07 09:51:47 crc kubenswrapper[5131]: E0107 09:51:47.203795 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:47.703765036 +0000 UTC m=+135.870066600 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.217449 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Jan 07 09:51:47 crc kubenswrapper[5131]: W0107 09:51:47.217925 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30454542_4fc6_4b3b_8917_13b0898bdc75.slice/crio-ee554726c535cf2472e77d7396d9b10409c0aa451d3a5916c68061d90ffdc19e WatchSource:0}: Error finding container ee554726c535cf2472e77d7396d9b10409c0aa451d3a5916c68061d90ffdc19e: Status 404 returned error can't find the container with id ee554726c535cf2472e77d7396d9b10409c0aa451d3a5916c68061d90ffdc19e Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.233083 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.252417 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.275139 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.292909 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.305022 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:47 crc kubenswrapper[5131]: E0107 09:51:47.305410 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:47.805397576 +0000 UTC m=+135.971699140 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.318662 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-k5x25"] Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.320147 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Jan 07 09:51:47 crc kubenswrapper[5131]: W0107 09:51:47.325250 5131 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f9f2345_5823_4288_ad4b_e49b1088cba4.slice/crio-7ae7f439fe47d1e751e7dceefd9441a5c6bdb45d4f511190171fb360b37f53ef WatchSource:0}: Error finding container 7ae7f439fe47d1e751e7dceefd9441a5c6bdb45d4f511190171fb360b37f53ef: Status 404 returned error can't find the container with id 7ae7f439fe47d1e751e7dceefd9441a5c6bdb45d4f511190171fb360b37f53ef Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.333234 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.349614 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-cw4c4"] Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.362152 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.373548 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.391822 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.404166 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-vm2vl"] Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.405723 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 07 09:51:47 crc kubenswrapper[5131]: E0107 09:51:47.405993 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:47.905977671 +0000 UTC m=+136.072279235 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.412337 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Jan 07 09:51:47 crc kubenswrapper[5131]: W0107 09:51:47.412735 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5116ec44_994d_4f27_872b_09ada0a94b73.slice/crio-191da6195636e0eb995e2c703ed4ab843325af1d6fb4022dc87267cb1fcc212f WatchSource:0}: Error finding container 191da6195636e0eb995e2c703ed4ab843325af1d6fb4022dc87267cb1fcc212f: Status 404 returned error can't find the container with id 191da6195636e0eb995e2c703ed4ab843325af1d6fb4022dc87267cb1fcc212f Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.432914 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.452407 5131 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\""
Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.472292 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\""
Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.494867 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\""
Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.506925 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4"
Jan 07 09:51:47 crc kubenswrapper[5131]: E0107 09:51:47.507508 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:48.007491628 +0000 UTC m=+136.173793192 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.511898 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\""
Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.533042 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\""
Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.552080 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\""
Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.571744 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\""
Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.591819 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\""
Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.608509 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 07 09:51:47 crc kubenswrapper[5131]: E0107 09:51:47.608715 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:48.10868495 +0000 UTC m=+136.274986564 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.608825 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4"
Jan 07 09:51:47 crc kubenswrapper[5131]: E0107 09:51:47.609478 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:48.109462705 +0000 UTC m=+136.275764329 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.611860 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\""
Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.633096 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-nhgrp"]
Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.633670 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\""
Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.636187 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-xvbzj"]
Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.652495 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\""
Jan 07 09:51:47 crc kubenswrapper[5131]: W0107 09:51:47.659172 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d8f71c1_e1fc_4770_ad03_7a1d4d244ce0.slice/crio-1434a8fe227b4b6bda7865800150ef0060335a40847a0e6212e85d481a9cce65 WatchSource:0}: Error finding container 1434a8fe227b4b6bda7865800150ef0060335a40847a0e6212e85d481a9cce65: Status 404 returned error can't find the container with id 1434a8fe227b4b6bda7865800150ef0060335a40847a0e6212e85d481a9cce65
Jan 07 09:51:47 crc kubenswrapper[5131]: W0107 09:51:47.660078 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6497dc94_29dd_4d24_8a87_6721b752e8d3.slice/crio-86300854854aae4982657befbafbb976d01490cb0157920835be7edfe0b908c1 WatchSource:0}: Error finding container 86300854854aae4982657befbafbb976d01490cb0157920835be7edfe0b908c1: Status 404 returned error can't find the container with id 86300854854aae4982657befbafbb976d01490cb0157920835be7edfe0b908c1
Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.672154 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\""
Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.692436 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\""
Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.709820 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 07 09:51:47 crc kubenswrapper[5131]: E0107 09:51:47.709947 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:48.209918764 +0000 UTC m=+136.376220348 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.710300 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4"
Jan 07 09:51:47 crc kubenswrapper[5131]: E0107 09:51:47.710608 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:48.210595884 +0000 UTC m=+136.376897438 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.712167 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\""
Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.732993 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\""
Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.752566 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\""
Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.778435 5131 request.go:752] "Waited before sending request" delay="1.979006627s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Ddns-default&limit=500&resourceVersion=0"
Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.782002 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\""
Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.792319 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\""
Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.811156 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 07 09:51:47 crc kubenswrapper[5131]: E0107 09:51:47.811311 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:48.311287704 +0000 UTC m=+136.477589268 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.811920 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4"
Jan 07 09:51:47 crc kubenswrapper[5131]: E0107 09:51:47.812207 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:48.312168483 +0000 UTC m=+136.478470047 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.813826 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\""
Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.822535 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-vm2vl" event={"ID":"5116ec44-994d-4f27-872b-09ada0a94b73","Type":"ContainerStarted","Data":"7c5b7bc7cd3853ea0e805e39b5b92e68f8047b64cad80556f804e13af67e71ad"}
Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.822579 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-vm2vl" event={"ID":"5116ec44-994d-4f27-872b-09ada0a94b73","Type":"ContainerStarted","Data":"191da6195636e0eb995e2c703ed4ab843325af1d6fb4022dc87267cb1fcc212f"}
Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.823940 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-nhgrp" event={"ID":"6497dc94-29dd-4d24-8a87-6721b752e8d3","Type":"ContainerStarted","Data":"ffe98d04ef4fa878834fd4bb1f5f8699886e95a8302947b778ace88b87235c70"}
Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.823985 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-nhgrp" event={"ID":"6497dc94-29dd-4d24-8a87-6721b752e8d3","Type":"ContainerStarted","Data":"86300854854aae4982657befbafbb976d01490cb0157920835be7edfe0b908c1"}
Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.824129 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-nhgrp"
Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.825744 5131 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-nhgrp container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body=
Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.825786 5131 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-nhgrp" podUID="6497dc94-29dd-4d24-8a87-6721b752e8d3" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused"
Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.832293 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\""
Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.833982 5131 generic.go:358] "Generic (PLEG): container finished" podID="4f9f2345-5823-4288-ad4b-e49b1088cba4" containerID="3c3a5171892573c89fa91e5cedce810a6b6f82caa151f985177dc6f66e354258" exitCode=0
Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.834090 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-k5x25" event={"ID":"4f9f2345-5823-4288-ad4b-e49b1088cba4","Type":"ContainerDied","Data":"3c3a5171892573c89fa91e5cedce810a6b6f82caa151f985177dc6f66e354258"}
Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.834121 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-k5x25" event={"ID":"4f9f2345-5823-4288-ad4b-e49b1088cba4","Type":"ContainerStarted","Data":"7ae7f439fe47d1e751e7dceefd9441a5c6bdb45d4f511190171fb360b37f53ef"}
Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.837251 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-vjhjw" event={"ID":"30454542-4fc6-4b3b-8917-13b0898bdc75","Type":"ContainerStarted","Data":"2b4f83486e94229dd8e87337ca0bc848006034d8dacced082fcc1badc951c4f7"}
Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.837286 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-vjhjw" event={"ID":"30454542-4fc6-4b3b-8917-13b0898bdc75","Type":"ContainerStarted","Data":"472346f7501467c2b378e18ea1f437bb20a76a4a46e0da070fec8933e2f80cb3"}
Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.837297 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-vjhjw" event={"ID":"30454542-4fc6-4b3b-8917-13b0898bdc75","Type":"ContainerStarted","Data":"ee554726c535cf2472e77d7396d9b10409c0aa451d3a5916c68061d90ffdc19e"}
Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.838804 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-cw4c4" event={"ID":"3c82fced-e466-4e52-8d61-b62e172d3ea9","Type":"ContainerStarted","Data":"4bf57bbe3dfe3bdcf4764d00c82ffd851173cdcc5a860f49e397ad22dc1de6c2"}
Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.838842 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-cw4c4" event={"ID":"3c82fced-e466-4e52-8d61-b62e172d3ea9","Type":"ContainerStarted","Data":"2cd9dade16d9ade754ad5b3feeefa524f721b2389908b60916d3c9aabf9dcfd2"}
Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.838853 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-cw4c4" event={"ID":"3c82fced-e466-4e52-8d61-b62e172d3ea9","Type":"ContainerStarted","Data":"99767f30770c27d4a9bd5246d2510eee6aa71b12de8a79e1bab33210c97c7eed"}
Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.840525 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-g5kcd" event={"ID":"0ef3c64b-1b50-43d7-888b-fa1d6dcf0282","Type":"ContainerStarted","Data":"497cfe9c7fabaa4c9ecb883da6cd29dd0c81ca5b0c7b377c07054c8ae91c65ec"}
Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.840551 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-g5kcd" event={"ID":"0ef3c64b-1b50-43d7-888b-fa1d6dcf0282","Type":"ContainerStarted","Data":"52be4cea2afcc627b1049ea74fa03029531abc95015b54e080e6460bc5213273"}
Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.842986 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-xvbzj" event={"ID":"1d8f71c1-e1fc-4770-ad03-7a1d4d244ce0","Type":"ContainerStarted","Data":"acbf00d0758cac2e65d87c85a2ce21e565b0cde484d76181a31c921d0e9e9df0"}
Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.843019 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-xvbzj" event={"ID":"1d8f71c1-e1fc-4770-ad03-7a1d4d244ce0","Type":"ContainerStarted","Data":"1434a8fe227b4b6bda7865800150ef0060335a40847a0e6212e85d481a9cce65"}
Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.845170 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-flx6z" event={"ID":"b56ad19c-62aa-42e5-bdce-bd890317e4da","Type":"ContainerStarted","Data":"f96fac4643222c53a9791d0f27a6693c9c58cffa421a8608c955e89faae93599"}
Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.845191 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-flx6z" event={"ID":"b56ad19c-62aa-42e5-bdce-bd890317e4da","Type":"ContainerStarted","Data":"da39e7c779cb397402a65b24032a2669a1e49fdf19550455668cfee625708b7c"}
Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.845517 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-67c89758df-flx6z"
Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.852563 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\""
Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.872559 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\""
Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.892197 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\""
Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.912401 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.912551 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-sysctl-allowlist\""
Jan 07 09:51:47 crc kubenswrapper[5131]: E0107 09:51:47.912575 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:48.412548219 +0000 UTC m=+136.578849793 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.912810 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4"
Jan 07 09:51:47 crc kubenswrapper[5131]: E0107 09:51:47.913328 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:48.413311493 +0000 UTC m=+136.579613057 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.975177 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkr9d\" (UniqueName: \"kubernetes.io/projected/9e92757e-cc25-48a6-a774-5c2a8a281576-kube-api-access-jkr9d\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4"
Jan 07 09:51:47 crc kubenswrapper[5131]: I0107 09:51:47.988484 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e92757e-cc25-48a6-a774-5c2a8a281576-bound-sa-token\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.013720 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhb7r\" (UniqueName: \"kubernetes.io/projected/0fe1be72-61f5-4433-908d-225206c4c7a1-kube-api-access-bhb7r\") pod \"cluster-samples-operator-6b564684c8-4ktsk\" (UID: \"0fe1be72-61f5-4433-908d-225206c4c7a1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-4ktsk"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.014099 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 07 09:51:48 crc kubenswrapper[5131]: E0107 09:51:48.014601 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:48.514583019 +0000 UTC m=+136.680884583 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.035004 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sv8dz\" (UniqueName: \"kubernetes.io/projected/4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4-kube-api-access-sv8dz\") pod \"controller-manager-65b6cccf98-pssml\" (UID: \"4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-pssml"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.047065 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2qkb\" (UniqueName: \"kubernetes.io/projected/1697c475-b030-40da-9ed0-7884931c55fd-kube-api-access-h2qkb\") pod \"marketplace-operator-547dbd544d-mrfk7\" (UID: \"1697c475-b030-40da-9ed0-7884931c55fd\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-mrfk7"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.067422 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xh68\" (UniqueName: \"kubernetes.io/projected/cb015a21-e0c4-4c90-a563-ec8010ee6bd2-kube-api-access-8xh68\") pod \"kube-storage-version-migrator-operator-565b79b866-rwmlc\" (UID: \"cb015a21-e0c4-4c90-a563-ec8010ee6bd2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-rwmlc"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.086128 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cw6cr\" (UniqueName: \"kubernetes.io/projected/20b2fa4c-8df5-43ac-a56a-397cb97e918d-kube-api-access-cw6cr\") pod \"machine-config-operator-67c9d58cbb-q9xx7\" (UID: \"20b2fa4c-8df5-43ac-a56a-397cb97e918d\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-q9xx7"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.106166 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-4ktsk"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.115640 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4"
Jan 07 09:51:48 crc kubenswrapper[5131]: E0107 09:51:48.116042 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:48.616026972 +0000 UTC m=+136.782328536 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.129590 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c39b15df-a1bc-4922-9712-8fba72c00fdf-kube-api-access\") pod \"kube-apiserver-operator-575994946d-l2qqh\" (UID: \"c39b15df-a1bc-4922-9712-8fba72c00fdf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-l2qqh"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.142362 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgd4t\" (UniqueName: \"kubernetes.io/projected/45f1adbe-9004-4ad3-b4e2-f8a0c6936502-kube-api-access-wgd4t\") pod \"packageserver-7d4fc7d867-psjk8\" (UID: \"45f1adbe-9004-4ad3-b4e2-f8a0c6936502\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-psjk8"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.149475 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-67c89758df-flx6z"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.172156 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6x2b\" (UniqueName: \"kubernetes.io/projected/0bbf073e-c62d-4074-a057-00541ac18caa-kube-api-access-b6x2b\") pod \"multus-admission-controller-69db94689b-gqz76\" (UID: \"0bbf073e-c62d-4074-a057-00541ac18caa\") " pod="openshift-multus/multus-admission-controller-69db94689b-gqz76"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.177677 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdvkl\" (UniqueName: \"kubernetes.io/projected/3444e2f9-d027-4e5d-b655-d564292fb959-kube-api-access-wdvkl\") pod \"olm-operator-5cdf44d969-vb59d\" (UID: \"3444e2f9-d027-4e5d-b655-d564292fb959\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-vb59d"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.198151 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg5jj\" (UniqueName: \"kubernetes.io/projected/3aec8df7-9c9b-4f00-9a8d-ab05bbecb4d4-kube-api-access-xg5jj\") pod \"migrator-866fcbc849-wm88r\" (UID: \"3aec8df7-9c9b-4f00-9a8d-ab05bbecb4d4\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-wm88r"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.208539 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7d27\" (UniqueName: \"kubernetes.io/projected/ea011b17-d07a-47da-9c01-d2a384306bcd-kube-api-access-p7d27\") pod \"catalog-operator-75ff9f647d-7dn5b\" (UID: \"ea011b17-d07a-47da-9c01-d2a384306bcd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-7dn5b"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.218990 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-rwmlc"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.220551 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 07 09:51:48 crc kubenswrapper[5131]: E0107 09:51:48.220731 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:48.72070378 +0000 UTC m=+136.887005344 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.221121 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4"
Jan 07 09:51:48 crc kubenswrapper[5131]: E0107 09:51:48.221478 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:48.721471054 +0000 UTC m=+136.887772618 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.239472 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-l2qqh"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.239611 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-gqz76"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.253643 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-7dn5b"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.265258 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-btwdj\" (UniqueName: \"kubernetes.io/projected/92c0b6a3-aea1-4854-9278-710a315edd4f-kube-api-access-btwdj\") pod \"apiserver-8596bd845d-jk2mp\" (UID: \"92c0b6a3-aea1-4854-9278-710a315edd4f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jk2mp"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.266898 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-djvht\" (UniqueName: \"kubernetes.io/projected/683287b8-61e8-4fb7-b688-586df63f560e-kube-api-access-djvht\") pod \"dns-operator-799b87ffcd-mh9sm\" (UID: \"683287b8-61e8-4fb7-b688-586df63f560e\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-mh9sm"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.272968 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-q9xx7"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.282772 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-vb59d"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.308046 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-mrfk7"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.318030 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-psjk8" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.323151 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.323279 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8b23000d-6c61-4c26-9d45-4433be4c9408-tmp\") pod \"cluster-image-registry-operator-86c45576b9-lbptl\" (UID: \"8b23000d-6c61-4c26-9d45-4433be4c9408\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-lbptl" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.323335 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/be5efe8d-ba1a-4bc1-b232-9eeff43c3277-tmp-dir\") pod \"etcd-operator-69b85846b6-4rj8b\" (UID: \"be5efe8d-ba1a-4bc1-b232-9eeff43c3277\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-4rj8b" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.323380 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e143601-07e2-425b-8478-f27f8045c536-config\") pod \"openshift-kube-scheduler-operator-54f497555d-hxnn7\" (UID: \"3e143601-07e2-425b-8478-f27f8045c536\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-hxnn7" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.323396 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/3e143601-07e2-425b-8478-f27f8045c536-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-hxnn7\" (UID: \"3e143601-07e2-425b-8478-f27f8045c536\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-hxnn7" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.323410 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6f5047a5-cbaa-4193-a89d-901db9b002d8-secret-volume\") pod \"collect-profiles-29462985-nd9tw\" (UID: \"6f5047a5-cbaa-4193-a89d-901db9b002d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29462985-nd9tw" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.323424 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/21eae8d8-8c33-4c90-b38d-d3fccae28e7d-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-hhbw5\" (UID: \"21eae8d8-8c33-4c90-b38d-d3fccae28e7d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-hhbw5" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.323441 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6f5047a5-cbaa-4193-a89d-901db9b002d8-config-volume\") pod \"collect-profiles-29462985-nd9tw\" (UID: \"6f5047a5-cbaa-4193-a89d-901db9b002d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29462985-nd9tw" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.323466 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/1586b0f3-181d-4c60-9dae-15afe62d18e3-signing-key\") pod \"service-ca-74545575db-fgq7r\" (UID: \"1586b0f3-181d-4c60-9dae-15afe62d18e3\") " 
pod="openshift-service-ca/service-ca-74545575db-fgq7r" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.323481 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/8b23000d-6c61-4c26-9d45-4433be4c9408-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-lbptl\" (UID: \"8b23000d-6c61-4c26-9d45-4433be4c9408\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-lbptl" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.323505 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpfbz\" (UniqueName: \"kubernetes.io/projected/6f5047a5-cbaa-4193-a89d-901db9b002d8-kube-api-access-cpfbz\") pod \"collect-profiles-29462985-nd9tw\" (UID: \"6f5047a5-cbaa-4193-a89d-901db9b002d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29462985-nd9tw" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.323560 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb879\" (UniqueName: \"kubernetes.io/projected/74766801-5e31-42d8-828f-ab317c8cc228-kube-api-access-qb879\") pod \"downloads-747b44746d-vdsqq\" (UID: \"74766801-5e31-42d8-828f-ab317c8cc228\") " pod="openshift-console/downloads-747b44746d-vdsqq" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.323577 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8cr7\" (UniqueName: \"kubernetes.io/projected/1586b0f3-181d-4c60-9dae-15afe62d18e3-kube-api-access-r8cr7\") pod \"service-ca-74545575db-fgq7r\" (UID: \"1586b0f3-181d-4c60-9dae-15afe62d18e3\") " pod="openshift-service-ca/service-ca-74545575db-fgq7r" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.323604 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-qpcvp\" (UniqueName: \"kubernetes.io/projected/c7ffb596-9bbc-4f89-b4f7-bdd77a5a420b-kube-api-access-qpcvp\") pod \"openshift-controller-manager-operator-686468bdd5-jz8vb\" (UID: \"c7ffb596-9bbc-4f89-b4f7-bdd77a5a420b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-jz8vb" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.323621 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d5f65eb-0ec3-427f-9153-62bbc1651bc8-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-pxdmv\" (UID: \"0d5f65eb-0ec3-427f-9153-62bbc1651bc8\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-pxdmv" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.323638 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/78325b9f-50a6-4dac-90a8-d28091bb5104-available-featuregates\") pod \"openshift-config-operator-5777786469-cnl99\" (UID: \"78325b9f-50a6-4dac-90a8-d28091bb5104\") " pod="openshift-config-operator/openshift-config-operator-5777786469-cnl99" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.323655 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21eae8d8-8c33-4c90-b38d-d3fccae28e7d-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-hhbw5\" (UID: \"21eae8d8-8c33-4c90-b38d-d3fccae28e7d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-hhbw5" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.323690 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7ffb596-9bbc-4f89-b4f7-bdd77a5a420b-config\") pod 
\"openshift-controller-manager-operator-686468bdd5-jz8vb\" (UID: \"c7ffb596-9bbc-4f89-b4f7-bdd77a5a420b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-jz8vb" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.323714 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llkr7\" (UniqueName: \"kubernetes.io/projected/18ffb9d1-d0b4-41bf-84ed-6d47984f831e-kube-api-access-llkr7\") pod \"control-plane-machine-set-operator-75ffdb6fcd-2zfmr\" (UID: \"18ffb9d1-d0b4-41bf-84ed-6d47984f831e\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-2zfmr" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.323737 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c7ffb596-9bbc-4f89-b4f7-bdd77a5a420b-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-jz8vb\" (UID: \"c7ffb596-9bbc-4f89-b4f7-bdd77a5a420b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-jz8vb" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.323777 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8b23000d-6c61-4c26-9d45-4433be4c9408-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-lbptl\" (UID: \"8b23000d-6c61-4c26-9d45-4433be4c9408\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-lbptl" Jan 07 09:51:48 crc kubenswrapper[5131]: E0107 09:51:48.323798 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-07 09:51:48.823781496 +0000 UTC m=+136.990083050 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.324856 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8b23000d-6c61-4c26-9d45-4433be4c9408-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-lbptl\" (UID: \"8b23000d-6c61-4c26-9d45-4433be4c9408\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-lbptl" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.324881 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/21eae8d8-8c33-4c90-b38d-d3fccae28e7d-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-hhbw5\" (UID: \"21eae8d8-8c33-4c90-b38d-d3fccae28e7d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-hhbw5" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.324898 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/be5efe8d-ba1a-4bc1-b232-9eeff43c3277-etcd-client\") pod \"etcd-operator-69b85846b6-4rj8b\" (UID: \"be5efe8d-ba1a-4bc1-b232-9eeff43c3277\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-4rj8b" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.324913 5131 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8b23000d-6c61-4c26-9d45-4433be4c9408-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-lbptl\" (UID: \"8b23000d-6c61-4c26-9d45-4433be4c9408\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-lbptl" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.324958 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/be5efe8d-ba1a-4bc1-b232-9eeff43c3277-etcd-service-ca\") pod \"etcd-operator-69b85846b6-4rj8b\" (UID: \"be5efe8d-ba1a-4bc1-b232-9eeff43c3277\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-4rj8b" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.325017 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0d5f65eb-0ec3-427f-9153-62bbc1651bc8-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-pxdmv\" (UID: \"0d5f65eb-0ec3-427f-9153-62bbc1651bc8\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-pxdmv" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.325055 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.325096 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/18ffb9d1-d0b4-41bf-84ed-6d47984f831e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-2zfmr\" (UID: \"18ffb9d1-d0b4-41bf-84ed-6d47984f831e\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-2zfmr" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.325185 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be5efe8d-ba1a-4bc1-b232-9eeff43c3277-serving-cert\") pod \"etcd-operator-69b85846b6-4rj8b\" (UID: \"be5efe8d-ba1a-4bc1-b232-9eeff43c3277\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-4rj8b" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.325212 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksvkc\" (UniqueName: \"kubernetes.io/projected/78325b9f-50a6-4dac-90a8-d28091bb5104-kube-api-access-ksvkc\") pod \"openshift-config-operator-5777786469-cnl99\" (UID: \"78325b9f-50a6-4dac-90a8-d28091bb5104\") " pod="openshift-config-operator/openshift-config-operator-5777786469-cnl99" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.325237 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/1586b0f3-181d-4c60-9dae-15afe62d18e3-signing-cabundle\") pod \"service-ca-74545575db-fgq7r\" (UID: \"1586b0f3-181d-4c60-9dae-15afe62d18e3\") " pod="openshift-service-ca/service-ca-74545575db-fgq7r" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.325257 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xpv6\" (UniqueName: \"kubernetes.io/projected/be5efe8d-ba1a-4bc1-b232-9eeff43c3277-kube-api-access-2xpv6\") pod \"etcd-operator-69b85846b6-4rj8b\" (UID: \"be5efe8d-ba1a-4bc1-b232-9eeff43c3277\") " 
pod="openshift-etcd-operator/etcd-operator-69b85846b6-4rj8b" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.325276 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/be5efe8d-ba1a-4bc1-b232-9eeff43c3277-etcd-ca\") pod \"etcd-operator-69b85846b6-4rj8b\" (UID: \"be5efe8d-ba1a-4bc1-b232-9eeff43c3277\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-4rj8b" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.325296 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6g9x\" (UniqueName: \"kubernetes.io/projected/0d5f65eb-0ec3-427f-9153-62bbc1651bc8-kube-api-access-q6g9x\") pod \"ingress-operator-6b9cb4dbcf-pxdmv\" (UID: \"0d5f65eb-0ec3-427f-9153-62bbc1651bc8\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-pxdmv" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.325328 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21eae8d8-8c33-4c90-b38d-d3fccae28e7d-config\") pod \"kube-controller-manager-operator-69d5f845f8-hhbw5\" (UID: \"21eae8d8-8c33-4c90-b38d-d3fccae28e7d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-hhbw5" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.325362 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjbwg\" (UniqueName: \"kubernetes.io/projected/8b23000d-6c61-4c26-9d45-4433be4c9408-kube-api-access-bjbwg\") pod \"cluster-image-registry-operator-86c45576b9-lbptl\" (UID: \"8b23000d-6c61-4c26-9d45-4433be4c9408\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-lbptl" Jan 07 09:51:48 crc kubenswrapper[5131]: E0107 09:51:48.325777 5131 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:48.825767575 +0000 UTC m=+136.992069129 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.326159 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3e143601-07e2-425b-8478-f27f8045c536-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-hxnn7\" (UID: \"3e143601-07e2-425b-8478-f27f8045c536\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-hxnn7" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.326193 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be5efe8d-ba1a-4bc1-b232-9eeff43c3277-config\") pod \"etcd-operator-69b85846b6-4rj8b\" (UID: \"be5efe8d-ba1a-4bc1-b232-9eeff43c3277\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-4rj8b" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.326209 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78325b9f-50a6-4dac-90a8-d28091bb5104-serving-cert\") pod \"openshift-config-operator-5777786469-cnl99\" (UID: \"78325b9f-50a6-4dac-90a8-d28091bb5104\") " 
pod="openshift-config-operator/openshift-config-operator-5777786469-cnl99" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.326277 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0d5f65eb-0ec3-427f-9153-62bbc1651bc8-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-pxdmv\" (UID: \"0d5f65eb-0ec3-427f-9153-62bbc1651bc8\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-pxdmv" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.326303 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7ffb596-9bbc-4f89-b4f7-bdd77a5a420b-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-jz8vb\" (UID: \"c7ffb596-9bbc-4f89-b4f7-bdd77a5a420b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-jz8vb" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.328155 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e143601-07e2-425b-8478-f27f8045c536-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-hxnn7\" (UID: \"3e143601-07e2-425b-8478-f27f8045c536\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-hxnn7" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.328854 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-wm88r" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.329564 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-pssml" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.352357 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-4ktsk"] Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.430885 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-mh9sm" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.432294 5131 ???:1] "http: TLS handshake error from 192.168.126.11:50942: no serving certificate available for the kubelet" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.433132 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.433278 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/1586b0f3-181d-4c60-9dae-15afe62d18e3-signing-cabundle\") pod \"service-ca-74545575db-fgq7r\" (UID: \"1586b0f3-181d-4c60-9dae-15afe62d18e3\") " pod="openshift-service-ca/service-ca-74545575db-fgq7r" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.433302 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/c61a2db1-fb94-4541-bc6a-57a2f0075072-ready\") pod \"cni-sysctl-allowlist-ds-grvm4\" (UID: \"c61a2db1-fb94-4541-bc6a-57a2f0075072\") " pod="openshift-multus/cni-sysctl-allowlist-ds-grvm4" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.433317 5131 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-audit-policies\") pod \"oauth-openshift-66458b6674-sftp2\" (UID: \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\") " pod="openshift-authentication/oauth-openshift-66458b6674-sftp2" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.433368 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2xpv6\" (UniqueName: \"kubernetes.io/projected/be5efe8d-ba1a-4bc1-b232-9eeff43c3277-kube-api-access-2xpv6\") pod \"etcd-operator-69b85846b6-4rj8b\" (UID: \"be5efe8d-ba1a-4bc1-b232-9eeff43c3277\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-4rj8b" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.433386 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/be5efe8d-ba1a-4bc1-b232-9eeff43c3277-etcd-ca\") pod \"etcd-operator-69b85846b6-4rj8b\" (UID: \"be5efe8d-ba1a-4bc1-b232-9eeff43c3277\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-4rj8b" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.433412 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q6g9x\" (UniqueName: \"kubernetes.io/projected/0d5f65eb-0ec3-427f-9153-62bbc1651bc8-kube-api-access-q6g9x\") pod \"ingress-operator-6b9cb4dbcf-pxdmv\" (UID: \"0d5f65eb-0ec3-427f-9153-62bbc1651bc8\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-pxdmv" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.433443 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21eae8d8-8c33-4c90-b38d-d3fccae28e7d-config\") pod \"kube-controller-manager-operator-69d5f845f8-hhbw5\" (UID: \"21eae8d8-8c33-4c90-b38d-d3fccae28e7d\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-hhbw5" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.433463 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-sftp2\" (UID: \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\") " pod="openshift-authentication/oauth-openshift-66458b6674-sftp2" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.433482 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bjbwg\" (UniqueName: \"kubernetes.io/projected/8b23000d-6c61-4c26-9d45-4433be4c9408-kube-api-access-bjbwg\") pod \"cluster-image-registry-operator-86c45576b9-lbptl\" (UID: \"8b23000d-6c61-4c26-9d45-4433be4c9408\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-lbptl" Jan 07 09:51:48 crc kubenswrapper[5131]: E0107 09:51:48.433519 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:48.93350162 +0000 UTC m=+137.099803184 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.433546 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-sftp2\" (UID: \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\") " pod="openshift-authentication/oauth-openshift-66458b6674-sftp2"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.433608 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-sftp2\" (UID: \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\") " pod="openshift-authentication/oauth-openshift-66458b6674-sftp2"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.433629 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c892059c-f661-4684-9a1e-19e0b0070d24-registration-dir\") pod \"csi-hostpathplugin-7cl88\" (UID: \"c892059c-f661-4684-9a1e-19e0b0070d24\") " pod="hostpath-provisioner/csi-hostpathplugin-7cl88"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.433658 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3e143601-07e2-425b-8478-f27f8045c536-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-hxnn7\" (UID: \"3e143601-07e2-425b-8478-f27f8045c536\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-hxnn7"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.433675 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgzd5\" (UniqueName: \"kubernetes.io/projected/c45456da-7004-44dd-8bf8-f3bf8f0fa6f8-kube-api-access-tgzd5\") pod \"service-ca-operator-5b9c976747-df6lk\" (UID: \"c45456da-7004-44dd-8bf8-f3bf8f0fa6f8\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-df6lk"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.433718 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be5efe8d-ba1a-4bc1-b232-9eeff43c3277-config\") pod \"etcd-operator-69b85846b6-4rj8b\" (UID: \"be5efe8d-ba1a-4bc1-b232-9eeff43c3277\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-4rj8b"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.433733 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78325b9f-50a6-4dac-90a8-d28091bb5104-serving-cert\") pod \"openshift-config-operator-5777786469-cnl99\" (UID: \"78325b9f-50a6-4dac-90a8-d28091bb5104\") " pod="openshift-config-operator/openshift-config-operator-5777786469-cnl99"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.433765 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c61a2db1-fb94-4541-bc6a-57a2f0075072-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-grvm4\" (UID: \"c61a2db1-fb94-4541-bc6a-57a2f0075072\") " pod="openshift-multus/cni-sysctl-allowlist-ds-grvm4"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.433793 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-sftp2\" (UID: \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\") " pod="openshift-authentication/oauth-openshift-66458b6674-sftp2"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.433897 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c45456da-7004-44dd-8bf8-f3bf8f0fa6f8-config\") pod \"service-ca-operator-5b9c976747-df6lk\" (UID: \"c45456da-7004-44dd-8bf8-f3bf8f0fa6f8\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-df6lk"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.433931 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0d5f65eb-0ec3-427f-9153-62bbc1651bc8-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-pxdmv\" (UID: \"0d5f65eb-0ec3-427f-9153-62bbc1651bc8\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-pxdmv"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.433967 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7ffb596-9bbc-4f89-b4f7-bdd77a5a420b-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-jz8vb\" (UID: \"c7ffb596-9bbc-4f89-b4f7-bdd77a5a420b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-jz8vb"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.433989 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e143601-07e2-425b-8478-f27f8045c536-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-hxnn7\" (UID: \"3e143601-07e2-425b-8478-f27f8045c536\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-hxnn7"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.434006 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmllt\" (UniqueName: \"kubernetes.io/projected/2fe92145-224c-4f45-a28e-78caadd67d93-kube-api-access-rmllt\") pod \"ingress-canary-b48tj\" (UID: \"2fe92145-224c-4f45-a28e-78caadd67d93\") " pod="openshift-ingress-canary/ingress-canary-b48tj"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.434818 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/71dececc-a0db-4099-9449-023def196d45-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-4nqqd\" (UID: \"71dececc-a0db-4099-9449-023def196d45\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-4nqqd"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.434859 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-audit-dir\") pod \"oauth-openshift-66458b6674-sftp2\" (UID: \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\") " pod="openshift-authentication/oauth-openshift-66458b6674-sftp2"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.434893 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8b23000d-6c61-4c26-9d45-4433be4c9408-tmp\") pod \"cluster-image-registry-operator-86c45576b9-lbptl\" (UID: \"8b23000d-6c61-4c26-9d45-4433be4c9408\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-lbptl"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.434947 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/c892059c-f661-4684-9a1e-19e0b0070d24-plugins-dir\") pod \"csi-hostpathplugin-7cl88\" (UID: \"c892059c-f661-4684-9a1e-19e0b0070d24\") " pod="hostpath-provisioner/csi-hostpathplugin-7cl88"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.434989 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/be5efe8d-ba1a-4bc1-b232-9eeff43c3277-tmp-dir\") pod \"etcd-operator-69b85846b6-4rj8b\" (UID: \"be5efe8d-ba1a-4bc1-b232-9eeff43c3277\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-4rj8b"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.435026 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh476\" (UniqueName: \"kubernetes.io/projected/c892059c-f661-4684-9a1e-19e0b0070d24-kube-api-access-bh476\") pod \"csi-hostpathplugin-7cl88\" (UID: \"c892059c-f661-4684-9a1e-19e0b0070d24\") " pod="hostpath-provisioner/csi-hostpathplugin-7cl88"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.435053 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25469bc4-e2e1-41c2-9b76-7f084b1feb46-service-ca-bundle\") pod \"router-default-68cf44c8b8-z4875\" (UID: \"25469bc4-e2e1-41c2-9b76-7f084b1feb46\") " pod="openshift-ingress/router-default-68cf44c8b8-z4875"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.435069 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/71dececc-a0db-4099-9449-023def196d45-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-4nqqd\" (UID: \"71dececc-a0db-4099-9449-023def196d45\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-4nqqd"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.435109 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btdcw\" (UniqueName: \"kubernetes.io/projected/38b19ba3-6ae3-4eef-9398-6ca8651cc5c1-kube-api-access-btdcw\") pod \"machine-config-server-h884s\" (UID: \"38b19ba3-6ae3-4eef-9398-6ca8651cc5c1\") " pod="openshift-machine-config-operator/machine-config-server-h884s"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.435132 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e143601-07e2-425b-8478-f27f8045c536-config\") pod \"openshift-kube-scheduler-operator-54f497555d-hxnn7\" (UID: \"3e143601-07e2-425b-8478-f27f8045c536\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-hxnn7"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.435149 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3e143601-07e2-425b-8478-f27f8045c536-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-hxnn7\" (UID: \"3e143601-07e2-425b-8478-f27f8045c536\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-hxnn7"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.435166 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6f5047a5-cbaa-4193-a89d-901db9b002d8-secret-volume\") pod \"collect-profiles-29462985-nd9tw\" (UID: \"6f5047a5-cbaa-4193-a89d-901db9b002d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29462985-nd9tw"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.435184 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp676\" (UniqueName: \"kubernetes.io/projected/52bea4d2-c484-40f1-9e1a-635ce6bcfe62-kube-api-access-lp676\") pod \"package-server-manager-77f986bd66-2r6qr\" (UID: \"52bea4d2-c484-40f1-9e1a-635ce6bcfe62\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2r6qr"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.435200 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-sftp2\" (UID: \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\") " pod="openshift-authentication/oauth-openshift-66458b6674-sftp2"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.435227 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/21eae8d8-8c33-4c90-b38d-d3fccae28e7d-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-hhbw5\" (UID: \"21eae8d8-8c33-4c90-b38d-d3fccae28e7d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-hhbw5"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.435244 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6f5047a5-cbaa-4193-a89d-901db9b002d8-config-volume\") pod \"collect-profiles-29462985-nd9tw\" (UID: \"6f5047a5-cbaa-4193-a89d-901db9b002d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29462985-nd9tw"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.435250 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/be5efe8d-ba1a-4bc1-b232-9eeff43c3277-etcd-ca\") pod \"etcd-operator-69b85846b6-4rj8b\" (UID: \"be5efe8d-ba1a-4bc1-b232-9eeff43c3277\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-4rj8b"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.435263 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/25469bc4-e2e1-41c2-9b76-7f084b1feb46-stats-auth\") pod \"router-default-68cf44c8b8-z4875\" (UID: \"25469bc4-e2e1-41c2-9b76-7f084b1feb46\") " pod="openshift-ingress/router-default-68cf44c8b8-z4875"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.435287 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/1586b0f3-181d-4c60-9dae-15afe62d18e3-signing-key\") pod \"service-ca-74545575db-fgq7r\" (UID: \"1586b0f3-181d-4c60-9dae-15afe62d18e3\") " pod="openshift-service-ca/service-ca-74545575db-fgq7r"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.435768 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/21eae8d8-8c33-4c90-b38d-d3fccae28e7d-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-hhbw5\" (UID: \"21eae8d8-8c33-4c90-b38d-d3fccae28e7d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-hhbw5"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.435934 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21eae8d8-8c33-4c90-b38d-d3fccae28e7d-config\") pod \"kube-controller-manager-operator-69d5f845f8-hhbw5\" (UID: \"21eae8d8-8c33-4c90-b38d-d3fccae28e7d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-hhbw5"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.436384 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8b23000d-6c61-4c26-9d45-4433be4c9408-tmp\") pod \"cluster-image-registry-operator-86c45576b9-lbptl\" (UID: \"8b23000d-6c61-4c26-9d45-4433be4c9408\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-lbptl"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.436813 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/be5efe8d-ba1a-4bc1-b232-9eeff43c3277-tmp-dir\") pod \"etcd-operator-69b85846b6-4rj8b\" (UID: \"be5efe8d-ba1a-4bc1-b232-9eeff43c3277\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-4rj8b"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.436937 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e143601-07e2-425b-8478-f27f8045c536-config\") pod \"openshift-kube-scheduler-operator-54f497555d-hxnn7\" (UID: \"3e143601-07e2-425b-8478-f27f8045c536\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-hxnn7"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.437201 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/8b23000d-6c61-4c26-9d45-4433be4c9408-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-lbptl\" (UID: \"8b23000d-6c61-4c26-9d45-4433be4c9408\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-lbptl"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.437243 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cpfbz\" (UniqueName: \"kubernetes.io/projected/6f5047a5-cbaa-4193-a89d-901db9b002d8-kube-api-access-cpfbz\") pod \"collect-profiles-29462985-nd9tw\" (UID: \"6f5047a5-cbaa-4193-a89d-901db9b002d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29462985-nd9tw"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.437302 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/de8ef978-428b-4c64-84a1-670939953bae-config-volume\") pod \"dns-default-h4w78\" (UID: \"de8ef978-428b-4c64-84a1-670939953bae\") " pod="openshift-dns/dns-default-h4w78"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.437326 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qb879\" (UniqueName: \"kubernetes.io/projected/74766801-5e31-42d8-828f-ab317c8cc228-kube-api-access-qb879\") pod \"downloads-747b44746d-vdsqq\" (UID: \"74766801-5e31-42d8-828f-ab317c8cc228\") " pod="openshift-console/downloads-747b44746d-vdsqq"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.437523 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3e143601-07e2-425b-8478-f27f8045c536-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-hxnn7\" (UID: \"3e143601-07e2-425b-8478-f27f8045c536\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-hxnn7"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.439217 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6f5047a5-cbaa-4193-a89d-901db9b002d8-config-volume\") pod \"collect-profiles-29462985-nd9tw\" (UID: \"6f5047a5-cbaa-4193-a89d-901db9b002d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29462985-nd9tw"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.439695 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be5efe8d-ba1a-4bc1-b232-9eeff43c3277-config\") pod \"etcd-operator-69b85846b6-4rj8b\" (UID: \"be5efe8d-ba1a-4bc1-b232-9eeff43c3277\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-4rj8b"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.440033 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/8b23000d-6c61-4c26-9d45-4433be4c9408-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-lbptl\" (UID: \"8b23000d-6c61-4c26-9d45-4433be4c9408\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-lbptl"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.440208 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/1586b0f3-181d-4c60-9dae-15afe62d18e3-signing-cabundle\") pod \"service-ca-74545575db-fgq7r\" (UID: \"1586b0f3-181d-4c60-9dae-15afe62d18e3\") " pod="openshift-service-ca/service-ca-74545575db-fgq7r"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.440291 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r8cr7\" (UniqueName: \"kubernetes.io/projected/1586b0f3-181d-4c60-9dae-15afe62d18e3-kube-api-access-r8cr7\") pod \"service-ca-74545575db-fgq7r\" (UID: \"1586b0f3-181d-4c60-9dae-15afe62d18e3\") " pod="openshift-service-ca/service-ca-74545575db-fgq7r"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.440668 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qpcvp\" (UniqueName: \"kubernetes.io/projected/c7ffb596-9bbc-4f89-b4f7-bdd77a5a420b-kube-api-access-qpcvp\") pod \"openshift-controller-manager-operator-686468bdd5-jz8vb\" (UID: \"c7ffb596-9bbc-4f89-b4f7-bdd77a5a420b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-jz8vb"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.440705 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldrw8\" (UniqueName: \"kubernetes.io/projected/c61a2db1-fb94-4541-bc6a-57a2f0075072-kube-api-access-ldrw8\") pod \"cni-sysctl-allowlist-ds-grvm4\" (UID: \"c61a2db1-fb94-4541-bc6a-57a2f0075072\") " pod="openshift-multus/cni-sysctl-allowlist-ds-grvm4"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.440733 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-sftp2\" (UID: \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\") " pod="openshift-authentication/oauth-openshift-66458b6674-sftp2"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.440799 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c892059c-f661-4684-9a1e-19e0b0070d24-mountpoint-dir\") pod \"csi-hostpathplugin-7cl88\" (UID: \"c892059c-f661-4684-9a1e-19e0b0070d24\") " pod="hostpath-provisioner/csi-hostpathplugin-7cl88"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.440827 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d5f65eb-0ec3-427f-9153-62bbc1651bc8-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-pxdmv\" (UID: \"0d5f65eb-0ec3-427f-9153-62bbc1651bc8\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-pxdmv"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.441422 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/78325b9f-50a6-4dac-90a8-d28091bb5104-available-featuregates\") pod \"openshift-config-operator-5777786469-cnl99\" (UID: \"78325b9f-50a6-4dac-90a8-d28091bb5104\") " pod="openshift-config-operator/openshift-config-operator-5777786469-cnl99"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.441452 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/38b19ba3-6ae3-4eef-9398-6ca8651cc5c1-node-bootstrap-token\") pod \"machine-config-server-h884s\" (UID: \"38b19ba3-6ae3-4eef-9398-6ca8651cc5c1\") " pod="openshift-machine-config-operator/machine-config-server-h884s"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.442071 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c61a2db1-fb94-4541-bc6a-57a2f0075072-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-grvm4\" (UID: \"c61a2db1-fb94-4541-bc6a-57a2f0075072\") " pod="openshift-multus/cni-sysctl-allowlist-ds-grvm4"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.442130 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21eae8d8-8c33-4c90-b38d-d3fccae28e7d-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-hhbw5\" (UID: \"21eae8d8-8c33-4c90-b38d-d3fccae28e7d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-hhbw5"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.442336 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d5f65eb-0ec3-427f-9153-62bbc1651bc8-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-pxdmv\" (UID: \"0d5f65eb-0ec3-427f-9153-62bbc1651bc8\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-pxdmv"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.442487 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2fe92145-224c-4f45-a28e-78caadd67d93-cert\") pod \"ingress-canary-b48tj\" (UID: \"2fe92145-224c-4f45-a28e-78caadd67d93\") " pod="openshift-ingress-canary/ingress-canary-b48tj"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.442527 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-sftp2\" (UID: \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\") " pod="openshift-authentication/oauth-openshift-66458b6674-sftp2"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.442609 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7ffb596-9bbc-4f89-b4f7-bdd77a5a420b-config\") pod \"openshift-controller-manager-operator-686468bdd5-jz8vb\" (UID: \"c7ffb596-9bbc-4f89-b4f7-bdd77a5a420b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-jz8vb"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.442634 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-llkr7\" (UniqueName: \"kubernetes.io/projected/18ffb9d1-d0b4-41bf-84ed-6d47984f831e-kube-api-access-llkr7\") pod \"control-plane-machine-set-operator-75ffdb6fcd-2zfmr\" (UID: \"18ffb9d1-d0b4-41bf-84ed-6d47984f831e\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-2zfmr"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.442668 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c7ffb596-9bbc-4f89-b4f7-bdd77a5a420b-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-jz8vb\" (UID: \"c7ffb596-9bbc-4f89-b4f7-bdd77a5a420b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-jz8vb"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.442742 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8b23000d-6c61-4c26-9d45-4433be4c9408-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-lbptl\" (UID: \"8b23000d-6c61-4c26-9d45-4433be4c9408\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-lbptl"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.442843 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8b23000d-6c61-4c26-9d45-4433be4c9408-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-lbptl\" (UID: \"8b23000d-6c61-4c26-9d45-4433be4c9408\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-lbptl"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.442863 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/21eae8d8-8c33-4c90-b38d-d3fccae28e7d-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-hhbw5\" (UID: \"21eae8d8-8c33-4c90-b38d-d3fccae28e7d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-hhbw5"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.442881 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/25469bc4-e2e1-41c2-9b76-7f084b1feb46-default-certificate\") pod \"router-default-68cf44c8b8-z4875\" (UID: \"25469bc4-e2e1-41c2-9b76-7f084b1feb46\") " pod="openshift-ingress/router-default-68cf44c8b8-z4875"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.442910 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/be5efe8d-ba1a-4bc1-b232-9eeff43c3277-etcd-client\") pod \"etcd-operator-69b85846b6-4rj8b\" (UID: \"be5efe8d-ba1a-4bc1-b232-9eeff43c3277\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-4rj8b"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.442926 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8b23000d-6c61-4c26-9d45-4433be4c9408-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-lbptl\" (UID: \"8b23000d-6c61-4c26-9d45-4433be4c9408\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-lbptl"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.442944 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gzj4\" (UniqueName: \"kubernetes.io/projected/71dececc-a0db-4099-9449-023def196d45-kube-api-access-4gzj4\") pod \"machine-config-controller-f9cdd68f7-4nqqd\" (UID: \"71dececc-a0db-4099-9449-023def196d45\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-4nqqd"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.442963 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c45456da-7004-44dd-8bf8-f3bf8f0fa6f8-serving-cert\") pod \"service-ca-operator-5b9c976747-df6lk\" (UID: \"c45456da-7004-44dd-8bf8-f3bf8f0fa6f8\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-df6lk"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.442979 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7pg4\" (UniqueName: \"kubernetes.io/projected/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-kube-api-access-q7pg4\") pod \"oauth-openshift-66458b6674-sftp2\" (UID: \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\") " pod="openshift-authentication/oauth-openshift-66458b6674-sftp2"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.442996 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c892059c-f661-4684-9a1e-19e0b0070d24-socket-dir\") pod \"csi-hostpathplugin-7cl88\" (UID: \"c892059c-f661-4684-9a1e-19e0b0070d24\") " pod="hostpath-provisioner/csi-hostpathplugin-7cl88"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.443129 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/52bea4d2-c484-40f1-9e1a-635ce6bcfe62-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-2r6qr\" (UID: \"52bea4d2-c484-40f1-9e1a-635ce6bcfe62\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2r6qr"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.443150 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-sftp2\" (UID: \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\") " pod="openshift-authentication/oauth-openshift-66458b6674-sftp2"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.443209 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-sftp2\" (UID: \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\") " pod="openshift-authentication/oauth-openshift-66458b6674-sftp2"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.443238 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/be5efe8d-ba1a-4bc1-b232-9eeff43c3277-etcd-service-ca\") pod \"etcd-operator-69b85846b6-4rj8b\" (UID: \"be5efe8d-ba1a-4bc1-b232-9eeff43c3277\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-4rj8b"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.443251 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/78325b9f-50a6-4dac-90a8-d28091bb5104-available-featuregates\") pod \"openshift-config-operator-5777786469-cnl99\" (UID: \"78325b9f-50a6-4dac-90a8-d28091bb5104\") " pod="openshift-config-operator/openshift-config-operator-5777786469-cnl99"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.443270 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/de8ef978-428b-4c64-84a1-670939953bae-tmp-dir\") pod \"dns-default-h4w78\" (UID: \"de8ef978-428b-4c64-84a1-670939953bae\") " pod="openshift-dns/dns-default-h4w78"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.443291 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0d5f65eb-0ec3-427f-9153-62bbc1651bc8-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-pxdmv\" (UID: \"0d5f65eb-0ec3-427f-9153-62bbc1651bc8\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-pxdmv"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.443309 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pkll\" (UniqueName: \"kubernetes.io/projected/25469bc4-e2e1-41c2-9b76-7f084b1feb46-kube-api-access-9pkll\") pod \"router-default-68cf44c8b8-z4875\" (UID: \"25469bc4-e2e1-41c2-9b76-7f084b1feb46\") " pod="openshift-ingress/router-default-68cf44c8b8-z4875"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.443893 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c7ffb596-9bbc-4f89-b4f7-bdd77a5a420b-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-jz8vb\" (UID: \"c7ffb596-9bbc-4f89-b4f7-bdd77a5a420b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-jz8vb"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.444248 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78325b9f-50a6-4dac-90a8-d28091bb5104-serving-cert\") pod \"openshift-config-operator-5777786469-cnl99\" (UID: \"78325b9f-50a6-4dac-90a8-d28091bb5104\") " pod="openshift-config-operator/openshift-config-operator-5777786469-cnl99"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.444778 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8b23000d-6c61-4c26-9d45-4433be4c9408-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-lbptl\" (UID: \"8b23000d-6c61-4c26-9d45-4433be4c9408\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-lbptl"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.445137 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7ffb596-9bbc-4f89-b4f7-bdd77a5a420b-config\") pod \"openshift-controller-manager-operator-686468bdd5-jz8vb\" (UID: \"c7ffb596-9bbc-4f89-b4f7-bdd77a5a420b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-jz8vb"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.445560 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/25469bc4-e2e1-41c2-9b76-7f084b1feb46-metrics-certs\") pod \"router-default-68cf44c8b8-z4875\" (UID: \"25469bc4-e2e1-41c2-9b76-7f084b1feb46\") " pod="openshift-ingress/router-default-68cf44c8b8-z4875"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.445588 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/38b19ba3-6ae3-4eef-9398-6ca8651cc5c1-certs\") pod \"machine-config-server-h884s\" (UID: \"38b19ba3-6ae3-4eef-9398-6ca8651cc5c1\") " pod="openshift-machine-config-operator/machine-config-server-h884s"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.445607 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-sftp2\" (UID: \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\") " pod="openshift-authentication/oauth-openshift-66458b6674-sftp2"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.446003 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.446086 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/18ffb9d1-d0b4-41bf-84ed-6d47984f831e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-2zfmr\" (UID: \"18ffb9d1-d0b4-41bf-84ed-6d47984f831e\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-2zfmr"
Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.446217 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7ffb596-9bbc-4f89-b4f7-bdd77a5a420b-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-jz8vb\" (UID: \"c7ffb596-9bbc-4f89-b4f7-bdd77a5a420b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-jz8vb" Jan 07 09:51:48 crc kubenswrapper[5131]: E0107 09:51:48.446301 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:48.946290141 +0000 UTC m=+137.112591705 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.446340 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/de8ef978-428b-4c64-84a1-670939953bae-metrics-tls\") pod \"dns-default-h4w78\" (UID: \"de8ef978-428b-4c64-84a1-670939953bae\") " pod="openshift-dns/dns-default-h4w78" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.446387 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be5efe8d-ba1a-4bc1-b232-9eeff43c3277-serving-cert\") pod \"etcd-operator-69b85846b6-4rj8b\" (UID: \"be5efe8d-ba1a-4bc1-b232-9eeff43c3277\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-4rj8b" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.446439 5131 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-sftp2\" (UID: \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\") " pod="openshift-authentication/oauth-openshift-66458b6674-sftp2" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.446458 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c892059c-f661-4684-9a1e-19e0b0070d24-csi-data-dir\") pod \"csi-hostpathplugin-7cl88\" (UID: \"c892059c-f661-4684-9a1e-19e0b0070d24\") " pod="hostpath-provisioner/csi-hostpathplugin-7cl88" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.446482 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ksvkc\" (UniqueName: \"kubernetes.io/projected/78325b9f-50a6-4dac-90a8-d28091bb5104-kube-api-access-ksvkc\") pod \"openshift-config-operator-5777786469-cnl99\" (UID: \"78325b9f-50a6-4dac-90a8-d28091bb5104\") " pod="openshift-config-operator/openshift-config-operator-5777786469-cnl99" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.446500 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spb8r\" (UniqueName: \"kubernetes.io/projected/de8ef978-428b-4c64-84a1-670939953bae-kube-api-access-spb8r\") pod \"dns-default-h4w78\" (UID: \"de8ef978-428b-4c64-84a1-670939953bae\") " pod="openshift-dns/dns-default-h4w78" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.448862 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/be5efe8d-ba1a-4bc1-b232-9eeff43c3277-etcd-service-ca\") pod \"etcd-operator-69b85846b6-4rj8b\" (UID: \"be5efe8d-ba1a-4bc1-b232-9eeff43c3277\") " 
pod="openshift-etcd-operator/etcd-operator-69b85846b6-4rj8b" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.450748 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8b23000d-6c61-4c26-9d45-4433be4c9408-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-lbptl\" (UID: \"8b23000d-6c61-4c26-9d45-4433be4c9408\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-lbptl" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.452607 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0d5f65eb-0ec3-427f-9153-62bbc1651bc8-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-pxdmv\" (UID: \"0d5f65eb-0ec3-427f-9153-62bbc1651bc8\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-pxdmv" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.453457 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/1586b0f3-181d-4c60-9dae-15afe62d18e3-signing-key\") pod \"service-ca-74545575db-fgq7r\" (UID: \"1586b0f3-181d-4c60-9dae-15afe62d18e3\") " pod="openshift-service-ca/service-ca-74545575db-fgq7r" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.456510 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e143601-07e2-425b-8478-f27f8045c536-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-hxnn7\" (UID: \"3e143601-07e2-425b-8478-f27f8045c536\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-hxnn7" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.457582 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6f5047a5-cbaa-4193-a89d-901db9b002d8-secret-volume\") pod 
\"collect-profiles-29462985-nd9tw\" (UID: \"6f5047a5-cbaa-4193-a89d-901db9b002d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29462985-nd9tw" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.464804 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21eae8d8-8c33-4c90-b38d-d3fccae28e7d-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-hhbw5\" (UID: \"21eae8d8-8c33-4c90-b38d-d3fccae28e7d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-hhbw5" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.467251 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be5efe8d-ba1a-4bc1-b232-9eeff43c3277-serving-cert\") pod \"etcd-operator-69b85846b6-4rj8b\" (UID: \"be5efe8d-ba1a-4bc1-b232-9eeff43c3277\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-4rj8b" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.468249 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/be5efe8d-ba1a-4bc1-b232-9eeff43c3277-etcd-client\") pod \"etcd-operator-69b85846b6-4rj8b\" (UID: \"be5efe8d-ba1a-4bc1-b232-9eeff43c3277\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-4rj8b" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.469923 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/18ffb9d1-d0b4-41bf-84ed-6d47984f831e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-2zfmr\" (UID: \"18ffb9d1-d0b4-41bf-84ed-6d47984f831e\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-2zfmr" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.476686 5131 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-bjbwg\" (UniqueName: \"kubernetes.io/projected/8b23000d-6c61-4c26-9d45-4433be4c9408-kube-api-access-bjbwg\") pod \"cluster-image-registry-operator-86c45576b9-lbptl\" (UID: \"8b23000d-6c61-4c26-9d45-4433be4c9408\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-lbptl" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.489436 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xpv6\" (UniqueName: \"kubernetes.io/projected/be5efe8d-ba1a-4bc1-b232-9eeff43c3277-kube-api-access-2xpv6\") pod \"etcd-operator-69b85846b6-4rj8b\" (UID: \"be5efe8d-ba1a-4bc1-b232-9eeff43c3277\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-4rj8b" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.506845 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6g9x\" (UniqueName: \"kubernetes.io/projected/0d5f65eb-0ec3-427f-9153-62bbc1651bc8-kube-api-access-q6g9x\") pod \"ingress-operator-6b9cb4dbcf-pxdmv\" (UID: \"0d5f65eb-0ec3-427f-9153-62bbc1651bc8\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-pxdmv" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.532923 5131 ???:1] "http: TLS handshake error from 192.168.126.11:50956: no serving certificate available for the kubelet" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.544380 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-jk2mp" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.547817 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.548073 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3e143601-07e2-425b-8478-f27f8045c536-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-hxnn7\" (UID: \"3e143601-07e2-425b-8478-f27f8045c536\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-hxnn7" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.548097 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/c892059c-f661-4684-9a1e-19e0b0070d24-plugins-dir\") pod \"csi-hostpathplugin-7cl88\" (UID: \"c892059c-f661-4684-9a1e-19e0b0070d24\") " pod="hostpath-provisioner/csi-hostpathplugin-7cl88" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.548138 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bh476\" (UniqueName: \"kubernetes.io/projected/c892059c-f661-4684-9a1e-19e0b0070d24-kube-api-access-bh476\") pod \"csi-hostpathplugin-7cl88\" (UID: \"c892059c-f661-4684-9a1e-19e0b0070d24\") " pod="hostpath-provisioner/csi-hostpathplugin-7cl88" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.548164 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25469bc4-e2e1-41c2-9b76-7f084b1feb46-service-ca-bundle\") pod 
\"router-default-68cf44c8b8-z4875\" (UID: \"25469bc4-e2e1-41c2-9b76-7f084b1feb46\") " pod="openshift-ingress/router-default-68cf44c8b8-z4875" Jan 07 09:51:48 crc kubenswrapper[5131]: E0107 09:51:48.548201 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:49.048182304 +0000 UTC m=+137.214483868 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.548229 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/71dececc-a0db-4099-9449-023def196d45-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-4nqqd\" (UID: \"71dececc-a0db-4099-9449-023def196d45\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-4nqqd" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.548282 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-btdcw\" (UniqueName: \"kubernetes.io/projected/38b19ba3-6ae3-4eef-9398-6ca8651cc5c1-kube-api-access-btdcw\") pod \"machine-config-server-h884s\" (UID: \"38b19ba3-6ae3-4eef-9398-6ca8651cc5c1\") " pod="openshift-machine-config-operator/machine-config-server-h884s" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.548312 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"kube-api-access-lp676\" (UniqueName: \"kubernetes.io/projected/52bea4d2-c484-40f1-9e1a-635ce6bcfe62-kube-api-access-lp676\") pod \"package-server-manager-77f986bd66-2r6qr\" (UID: \"52bea4d2-c484-40f1-9e1a-635ce6bcfe62\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2r6qr" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.548331 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-sftp2\" (UID: \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\") " pod="openshift-authentication/oauth-openshift-66458b6674-sftp2" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.548361 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/25469bc4-e2e1-41c2-9b76-7f084b1feb46-stats-auth\") pod \"router-default-68cf44c8b8-z4875\" (UID: \"25469bc4-e2e1-41c2-9b76-7f084b1feb46\") " pod="openshift-ingress/router-default-68cf44c8b8-z4875" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.548401 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/de8ef978-428b-4c64-84a1-670939953bae-config-volume\") pod \"dns-default-h4w78\" (UID: \"de8ef978-428b-4c64-84a1-670939953bae\") " pod="openshift-dns/dns-default-h4w78" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.548429 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ldrw8\" (UniqueName: \"kubernetes.io/projected/c61a2db1-fb94-4541-bc6a-57a2f0075072-kube-api-access-ldrw8\") pod \"cni-sysctl-allowlist-ds-grvm4\" (UID: \"c61a2db1-fb94-4541-bc6a-57a2f0075072\") " pod="openshift-multus/cni-sysctl-allowlist-ds-grvm4" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.548444 5131 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-sftp2\" (UID: \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\") " pod="openshift-authentication/oauth-openshift-66458b6674-sftp2" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.548485 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c892059c-f661-4684-9a1e-19e0b0070d24-mountpoint-dir\") pod \"csi-hostpathplugin-7cl88\" (UID: \"c892059c-f661-4684-9a1e-19e0b0070d24\") " pod="hostpath-provisioner/csi-hostpathplugin-7cl88" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.548790 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/c892059c-f661-4684-9a1e-19e0b0070d24-plugins-dir\") pod \"csi-hostpathplugin-7cl88\" (UID: \"c892059c-f661-4684-9a1e-19e0b0070d24\") " pod="hostpath-provisioner/csi-hostpathplugin-7cl88" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.548977 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25469bc4-e2e1-41c2-9b76-7f084b1feb46-service-ca-bundle\") pod \"router-default-68cf44c8b8-z4875\" (UID: \"25469bc4-e2e1-41c2-9b76-7f084b1feb46\") " pod="openshift-ingress/router-default-68cf44c8b8-z4875" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.549072 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/38b19ba3-6ae3-4eef-9398-6ca8651cc5c1-node-bootstrap-token\") pod \"machine-config-server-h884s\" (UID: \"38b19ba3-6ae3-4eef-9398-6ca8651cc5c1\") " pod="openshift-machine-config-operator/machine-config-server-h884s" Jan 07 09:51:48 crc kubenswrapper[5131]: 
I0107 09:51:48.549112 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c61a2db1-fb94-4541-bc6a-57a2f0075072-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-grvm4\" (UID: \"c61a2db1-fb94-4541-bc6a-57a2f0075072\") " pod="openshift-multus/cni-sysctl-allowlist-ds-grvm4" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.549138 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2fe92145-224c-4f45-a28e-78caadd67d93-cert\") pod \"ingress-canary-b48tj\" (UID: \"2fe92145-224c-4f45-a28e-78caadd67d93\") " pod="openshift-ingress-canary/ingress-canary-b48tj" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.549149 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c892059c-f661-4684-9a1e-19e0b0070d24-mountpoint-dir\") pod \"csi-hostpathplugin-7cl88\" (UID: \"c892059c-f661-4684-9a1e-19e0b0070d24\") " pod="hostpath-provisioner/csi-hostpathplugin-7cl88" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.549154 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-sftp2\" (UID: \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\") " pod="openshift-authentication/oauth-openshift-66458b6674-sftp2" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.549634 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/25469bc4-e2e1-41c2-9b76-7f084b1feb46-default-certificate\") pod \"router-default-68cf44c8b8-z4875\" (UID: \"25469bc4-e2e1-41c2-9b76-7f084b1feb46\") " pod="openshift-ingress/router-default-68cf44c8b8-z4875" Jan 07 
09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.549655 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4gzj4\" (UniqueName: \"kubernetes.io/projected/71dececc-a0db-4099-9449-023def196d45-kube-api-access-4gzj4\") pod \"machine-config-controller-f9cdd68f7-4nqqd\" (UID: \"71dececc-a0db-4099-9449-023def196d45\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-4nqqd" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.549672 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c45456da-7004-44dd-8bf8-f3bf8f0fa6f8-serving-cert\") pod \"service-ca-operator-5b9c976747-df6lk\" (UID: \"c45456da-7004-44dd-8bf8-f3bf8f0fa6f8\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-df6lk" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.549687 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q7pg4\" (UniqueName: \"kubernetes.io/projected/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-kube-api-access-q7pg4\") pod \"oauth-openshift-66458b6674-sftp2\" (UID: \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\") " pod="openshift-authentication/oauth-openshift-66458b6674-sftp2" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.549704 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c892059c-f661-4684-9a1e-19e0b0070d24-socket-dir\") pod \"csi-hostpathplugin-7cl88\" (UID: \"c892059c-f661-4684-9a1e-19e0b0070d24\") " pod="hostpath-provisioner/csi-hostpathplugin-7cl88" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.549723 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/52bea4d2-c484-40f1-9e1a-635ce6bcfe62-package-server-manager-serving-cert\") pod 
\"package-server-manager-77f986bd66-2r6qr\" (UID: \"52bea4d2-c484-40f1-9e1a-635ce6bcfe62\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2r6qr" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.549741 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-sftp2\" (UID: \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\") " pod="openshift-authentication/oauth-openshift-66458b6674-sftp2" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.549751 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c61a2db1-fb94-4541-bc6a-57a2f0075072-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-grvm4\" (UID: \"c61a2db1-fb94-4541-bc6a-57a2f0075072\") " pod="openshift-multus/cni-sysctl-allowlist-ds-grvm4" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.549772 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-sftp2\" (UID: \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\") " pod="openshift-authentication/oauth-openshift-66458b6674-sftp2" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.549796 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/de8ef978-428b-4c64-84a1-670939953bae-tmp-dir\") pod \"dns-default-h4w78\" (UID: \"de8ef978-428b-4c64-84a1-670939953bae\") " pod="openshift-dns/dns-default-h4w78" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.549815 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9pkll\" 
(UniqueName: \"kubernetes.io/projected/25469bc4-e2e1-41c2-9b76-7f084b1feb46-kube-api-access-9pkll\") pod \"router-default-68cf44c8b8-z4875\" (UID: \"25469bc4-e2e1-41c2-9b76-7f084b1feb46\") " pod="openshift-ingress/router-default-68cf44c8b8-z4875" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.549867 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/25469bc4-e2e1-41c2-9b76-7f084b1feb46-metrics-certs\") pod \"router-default-68cf44c8b8-z4875\" (UID: \"25469bc4-e2e1-41c2-9b76-7f084b1feb46\") " pod="openshift-ingress/router-default-68cf44c8b8-z4875" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.549891 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/38b19ba3-6ae3-4eef-9398-6ca8651cc5c1-certs\") pod \"machine-config-server-h884s\" (UID: \"38b19ba3-6ae3-4eef-9398-6ca8651cc5c1\") " pod="openshift-machine-config-operator/machine-config-server-h884s" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.549909 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-sftp2\" (UID: \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\") " pod="openshift-authentication/oauth-openshift-66458b6674-sftp2" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.549933 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.549956 5131 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/de8ef978-428b-4c64-84a1-670939953bae-metrics-tls\") pod \"dns-default-h4w78\" (UID: \"de8ef978-428b-4c64-84a1-670939953bae\") " pod="openshift-dns/dns-default-h4w78" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.549981 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-sftp2\" (UID: \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\") " pod="openshift-authentication/oauth-openshift-66458b6674-sftp2" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.550026 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c892059c-f661-4684-9a1e-19e0b0070d24-csi-data-dir\") pod \"csi-hostpathplugin-7cl88\" (UID: \"c892059c-f661-4684-9a1e-19e0b0070d24\") " pod="hostpath-provisioner/csi-hostpathplugin-7cl88" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.550044 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-spb8r\" (UniqueName: \"kubernetes.io/projected/de8ef978-428b-4c64-84a1-670939953bae-kube-api-access-spb8r\") pod \"dns-default-h4w78\" (UID: \"de8ef978-428b-4c64-84a1-670939953bae\") " pod="openshift-dns/dns-default-h4w78" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.550070 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/c61a2db1-fb94-4541-bc6a-57a2f0075072-ready\") pod \"cni-sysctl-allowlist-ds-grvm4\" (UID: \"c61a2db1-fb94-4541-bc6a-57a2f0075072\") " pod="openshift-multus/cni-sysctl-allowlist-ds-grvm4" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.550085 5131 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-audit-policies\") pod \"oauth-openshift-66458b6674-sftp2\" (UID: \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\") " pod="openshift-authentication/oauth-openshift-66458b6674-sftp2" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.550119 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-sftp2\" (UID: \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\") " pod="openshift-authentication/oauth-openshift-66458b6674-sftp2" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.550141 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-sftp2\" (UID: \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\") " pod="openshift-authentication/oauth-openshift-66458b6674-sftp2" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.550161 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-sftp2\" (UID: \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\") " pod="openshift-authentication/oauth-openshift-66458b6674-sftp2" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.550175 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c892059c-f661-4684-9a1e-19e0b0070d24-registration-dir\") pod \"csi-hostpathplugin-7cl88\" (UID: 
\"c892059c-f661-4684-9a1e-19e0b0070d24\") " pod="hostpath-provisioner/csi-hostpathplugin-7cl88" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.550197 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tgzd5\" (UniqueName: \"kubernetes.io/projected/c45456da-7004-44dd-8bf8-f3bf8f0fa6f8-kube-api-access-tgzd5\") pod \"service-ca-operator-5b9c976747-df6lk\" (UID: \"c45456da-7004-44dd-8bf8-f3bf8f0fa6f8\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-df6lk" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.550220 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c61a2db1-fb94-4541-bc6a-57a2f0075072-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-grvm4\" (UID: \"c61a2db1-fb94-4541-bc6a-57a2f0075072\") " pod="openshift-multus/cni-sysctl-allowlist-ds-grvm4" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.550241 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-sftp2\" (UID: \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\") " pod="openshift-authentication/oauth-openshift-66458b6674-sftp2" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.550262 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c45456da-7004-44dd-8bf8-f3bf8f0fa6f8-config\") pod \"service-ca-operator-5b9c976747-df6lk\" (UID: \"c45456da-7004-44dd-8bf8-f3bf8f0fa6f8\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-df6lk" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.550295 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rmllt\" (UniqueName: 
\"kubernetes.io/projected/2fe92145-224c-4f45-a28e-78caadd67d93-kube-api-access-rmllt\") pod \"ingress-canary-b48tj\" (UID: \"2fe92145-224c-4f45-a28e-78caadd67d93\") " pod="openshift-ingress-canary/ingress-canary-b48tj" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.550298 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-sftp2\" (UID: \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\") " pod="openshift-authentication/oauth-openshift-66458b6674-sftp2" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.550312 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/71dececc-a0db-4099-9449-023def196d45-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-4nqqd\" (UID: \"71dececc-a0db-4099-9449-023def196d45\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-4nqqd" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.550332 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-audit-dir\") pod \"oauth-openshift-66458b6674-sftp2\" (UID: \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\") " pod="openshift-authentication/oauth-openshift-66458b6674-sftp2" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.550418 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-audit-dir\") pod \"oauth-openshift-66458b6674-sftp2\" (UID: \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\") " pod="openshift-authentication/oauth-openshift-66458b6674-sftp2" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.551514 5131 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/de8ef978-428b-4c64-84a1-670939953bae-tmp-dir\") pod \"dns-default-h4w78\" (UID: \"de8ef978-428b-4c64-84a1-670939953bae\") " pod="openshift-dns/dns-default-h4w78" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.551683 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/de8ef978-428b-4c64-84a1-670939953bae-config-volume\") pod \"dns-default-h4w78\" (UID: \"de8ef978-428b-4c64-84a1-670939953bae\") " pod="openshift-dns/dns-default-h4w78" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.551956 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c892059c-f661-4684-9a1e-19e0b0070d24-socket-dir\") pod \"csi-hostpathplugin-7cl88\" (UID: \"c892059c-f661-4684-9a1e-19e0b0070d24\") " pod="hostpath-provisioner/csi-hostpathplugin-7cl88" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.552162 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/c61a2db1-fb94-4541-bc6a-57a2f0075072-ready\") pod \"cni-sysctl-allowlist-ds-grvm4\" (UID: \"c61a2db1-fb94-4541-bc6a-57a2f0075072\") " pod="openshift-multus/cni-sysctl-allowlist-ds-grvm4" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.552180 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-sftp2\" (UID: \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\") " pod="openshift-authentication/oauth-openshift-66458b6674-sftp2" Jan 07 09:51:48 crc kubenswrapper[5131]: E0107 09:51:48.552490 5131 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:49.052476436 +0000 UTC m=+137.218778000 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.552939 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c892059c-f661-4684-9a1e-19e0b0070d24-csi-data-dir\") pod \"csi-hostpathplugin-7cl88\" (UID: \"c892059c-f661-4684-9a1e-19e0b0070d24\") " pod="hostpath-provisioner/csi-hostpathplugin-7cl88" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.553079 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c892059c-f661-4684-9a1e-19e0b0070d24-registration-dir\") pod \"csi-hostpathplugin-7cl88\" (UID: \"c892059c-f661-4684-9a1e-19e0b0070d24\") " pod="hostpath-provisioner/csi-hostpathplugin-7cl88" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.553425 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-audit-policies\") pod \"oauth-openshift-66458b6674-sftp2\" (UID: \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\") " pod="openshift-authentication/oauth-openshift-66458b6674-sftp2" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.553692 5131 operation_generator.go:615] "MountVolume.SetUp succeeded 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c45456da-7004-44dd-8bf8-f3bf8f0fa6f8-config\") pod \"service-ca-operator-5b9c976747-df6lk\" (UID: \"c45456da-7004-44dd-8bf8-f3bf8f0fa6f8\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-df6lk" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.553817 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/71dececc-a0db-4099-9449-023def196d45-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-4nqqd\" (UID: \"71dececc-a0db-4099-9449-023def196d45\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-4nqqd" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.553820 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-sftp2\" (UID: \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\") " pod="openshift-authentication/oauth-openshift-66458b6674-sftp2" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.553887 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c61a2db1-fb94-4541-bc6a-57a2f0075072-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-grvm4\" (UID: \"c61a2db1-fb94-4541-bc6a-57a2f0075072\") " pod="openshift-multus/cni-sysctl-allowlist-ds-grvm4" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.554298 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/38b19ba3-6ae3-4eef-9398-6ca8651cc5c1-node-bootstrap-token\") pod \"machine-config-server-h884s\" (UID: \"38b19ba3-6ae3-4eef-9398-6ca8651cc5c1\") " pod="openshift-machine-config-operator/machine-config-server-h884s" Jan 07 09:51:48 crc 
kubenswrapper[5131]: I0107 09:51:48.554335 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-sftp2\" (UID: \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\") " pod="openshift-authentication/oauth-openshift-66458b6674-sftp2" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.554571 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2fe92145-224c-4f45-a28e-78caadd67d93-cert\") pod \"ingress-canary-b48tj\" (UID: \"2fe92145-224c-4f45-a28e-78caadd67d93\") " pod="openshift-ingress-canary/ingress-canary-b48tj" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.557152 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/71dececc-a0db-4099-9449-023def196d45-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-4nqqd\" (UID: \"71dececc-a0db-4099-9449-023def196d45\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-4nqqd" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.563547 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/25469bc4-e2e1-41c2-9b76-7f084b1feb46-metrics-certs\") pod \"router-default-68cf44c8b8-z4875\" (UID: \"25469bc4-e2e1-41c2-9b76-7f084b1feb46\") " pod="openshift-ingress/router-default-68cf44c8b8-z4875" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.566010 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-sftp2\" (UID: \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\") " 
pod="openshift-authentication/oauth-openshift-66458b6674-sftp2" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.567369 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/38b19ba3-6ae3-4eef-9398-6ca8651cc5c1-certs\") pod \"machine-config-server-h884s\" (UID: \"38b19ba3-6ae3-4eef-9398-6ca8651cc5c1\") " pod="openshift-machine-config-operator/machine-config-server-h884s" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.567458 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c45456da-7004-44dd-8bf8-f3bf8f0fa6f8-serving-cert\") pod \"service-ca-operator-5b9c976747-df6lk\" (UID: \"c45456da-7004-44dd-8bf8-f3bf8f0fa6f8\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-df6lk" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.567934 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/de8ef978-428b-4c64-84a1-670939953bae-metrics-tls\") pod \"dns-default-h4w78\" (UID: \"de8ef978-428b-4c64-84a1-670939953bae\") " pod="openshift-dns/dns-default-h4w78" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.571338 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-sftp2\" (UID: \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\") " pod="openshift-authentication/oauth-openshift-66458b6674-sftp2" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.581460 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-sftp2\" (UID: 
\"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\") " pod="openshift-authentication/oauth-openshift-66458b6674-sftp2" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.582774 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/25469bc4-e2e1-41c2-9b76-7f084b1feb46-default-certificate\") pod \"router-default-68cf44c8b8-z4875\" (UID: \"25469bc4-e2e1-41c2-9b76-7f084b1feb46\") " pod="openshift-ingress/router-default-68cf44c8b8-z4875" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.587167 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-sftp2\" (UID: \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\") " pod="openshift-authentication/oauth-openshift-66458b6674-sftp2" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.587624 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/25469bc4-e2e1-41c2-9b76-7f084b1feb46-stats-auth\") pod \"router-default-68cf44c8b8-z4875\" (UID: \"25469bc4-e2e1-41c2-9b76-7f084b1feb46\") " pod="openshift-ingress/router-default-68cf44c8b8-z4875" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.591115 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-sftp2\" (UID: \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\") " pod="openshift-authentication/oauth-openshift-66458b6674-sftp2" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.593234 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-sftp2\" (UID: \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\") " pod="openshift-authentication/oauth-openshift-66458b6674-sftp2" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.599476 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-sftp2\" (UID: \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\") " pod="openshift-authentication/oauth-openshift-66458b6674-sftp2" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.611619 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qb879\" (UniqueName: \"kubernetes.io/projected/74766801-5e31-42d8-828f-ab317c8cc228-kube-api-access-qb879\") pod \"downloads-747b44746d-vdsqq\" (UID: \"74766801-5e31-42d8-828f-ab317c8cc228\") " pod="openshift-console/downloads-747b44746d-vdsqq" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.612103 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpfbz\" (UniqueName: \"kubernetes.io/projected/6f5047a5-cbaa-4193-a89d-901db9b002d8-kube-api-access-cpfbz\") pod \"collect-profiles-29462985-nd9tw\" (UID: \"6f5047a5-cbaa-4193-a89d-901db9b002d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29462985-nd9tw" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.613293 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/52bea4d2-c484-40f1-9e1a-635ce6bcfe62-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-2r6qr\" (UID: \"52bea4d2-c484-40f1-9e1a-635ce6bcfe62\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2r6qr" Jan 07 09:51:48 
crc kubenswrapper[5131]: I0107 09:51:48.620515 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8cr7\" (UniqueName: \"kubernetes.io/projected/1586b0f3-181d-4c60-9dae-15afe62d18e3-kube-api-access-r8cr7\") pod \"service-ca-74545575db-fgq7r\" (UID: \"1586b0f3-181d-4c60-9dae-15afe62d18e3\") " pod="openshift-service-ca/service-ca-74545575db-fgq7r" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.631261 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-fgq7r" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.636981 5131 ???:1] "http: TLS handshake error from 192.168.126.11:50962: no serving certificate available for the kubelet" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.638061 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpcvp\" (UniqueName: \"kubernetes.io/projected/c7ffb596-9bbc-4f89-b4f7-bdd77a5a420b-kube-api-access-qpcvp\") pod \"openshift-controller-manager-operator-686468bdd5-jz8vb\" (UID: \"c7ffb596-9bbc-4f89-b4f7-bdd77a5a420b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-jz8vb" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.650916 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 07 09:51:48 crc kubenswrapper[5131]: E0107 09:51:48.651567 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-07 09:51:49.151533133 +0000 UTC m=+137.317834717 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.663461 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/21eae8d8-8c33-4c90-b38d-d3fccae28e7d-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-hhbw5\" (UID: \"21eae8d8-8c33-4c90-b38d-d3fccae28e7d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-hhbw5" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.713440 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-llkr7\" (UniqueName: \"kubernetes.io/projected/18ffb9d1-d0b4-41bf-84ed-6d47984f831e-kube-api-access-llkr7\") pod \"control-plane-machine-set-operator-75ffdb6fcd-2zfmr\" (UID: \"18ffb9d1-d0b4-41bf-84ed-6d47984f831e\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-2zfmr" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.716966 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-jz8vb" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.721579 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8b23000d-6c61-4c26-9d45-4433be4c9408-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-lbptl\" (UID: \"8b23000d-6c61-4c26-9d45-4433be4c9408\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-lbptl" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.730899 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0d5f65eb-0ec3-427f-9153-62bbc1651bc8-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-pxdmv\" (UID: \"0d5f65eb-0ec3-427f-9153-62bbc1651bc8\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-pxdmv" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.732055 5131 ???:1] "http: TLS handshake error from 192.168.126.11:50968: no serving certificate available for the kubelet" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.734097 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksvkc\" (UniqueName: \"kubernetes.io/projected/78325b9f-50a6-4dac-90a8-d28091bb5104-kube-api-access-ksvkc\") pod \"openshift-config-operator-5777786469-cnl99\" (UID: \"78325b9f-50a6-4dac-90a8-d28091bb5104\") " pod="openshift-config-operator/openshift-config-operator-5777786469-cnl99" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.740250 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-vdsqq" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.746173 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-4rj8b" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.751240 5131 ???:1] "http: TLS handshake error from 192.168.126.11:50984: no serving certificate available for the kubelet" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.753576 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:48 crc kubenswrapper[5131]: E0107 09:51:48.754106 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:49.254089626 +0000 UTC m=+137.420391190 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.763508 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-lbptl" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.765500 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-hxnn7" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.773788 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bh476\" (UniqueName: \"kubernetes.io/projected/c892059c-f661-4684-9a1e-19e0b0070d24-kube-api-access-bh476\") pod \"csi-hostpathplugin-7cl88\" (UID: \"c892059c-f661-4684-9a1e-19e0b0070d24\") " pod="hostpath-provisioner/csi-hostpathplugin-7cl88" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.789063 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-btdcw\" (UniqueName: \"kubernetes.io/projected/38b19ba3-6ae3-4eef-9398-6ca8651cc5c1-kube-api-access-btdcw\") pod \"machine-config-server-h884s\" (UID: \"38b19ba3-6ae3-4eef-9398-6ca8651cc5c1\") " pod="openshift-machine-config-operator/machine-config-server-h884s" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.803809 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-pxdmv" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.810366 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-hhbw5" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.813004 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lp676\" (UniqueName: \"kubernetes.io/projected/52bea4d2-c484-40f1-9e1a-635ce6bcfe62-kube-api-access-lp676\") pod \"package-server-manager-77f986bd66-2r6qr\" (UID: \"52bea4d2-c484-40f1-9e1a-635ce6bcfe62\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2r6qr" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.837423 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldrw8\" (UniqueName: \"kubernetes.io/projected/c61a2db1-fb94-4541-bc6a-57a2f0075072-kube-api-access-ldrw8\") pod \"cni-sysctl-allowlist-ds-grvm4\" (UID: \"c61a2db1-fb94-4541-bc6a-57a2f0075072\") " pod="openshift-multus/cni-sysctl-allowlist-ds-grvm4" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.842287 5131 ???:1] "http: TLS handshake error from 192.168.126.11:50986: no serving certificate available for the kubelet" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.855366 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 07 09:51:48 crc kubenswrapper[5131]: E0107 09:51:48.855774 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:49.355754659 +0000 UTC m=+137.522056223 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.857222 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-2zfmr" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.859558 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7pg4\" (UniqueName: \"kubernetes.io/projected/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-kube-api-access-q7pg4\") pod \"oauth-openshift-66458b6674-sftp2\" (UID: \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\") " pod="openshift-authentication/oauth-openshift-66458b6674-sftp2" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.865792 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29462985-nd9tw" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.875178 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gzj4\" (UniqueName: \"kubernetes.io/projected/71dececc-a0db-4099-9449-023def196d45-kube-api-access-4gzj4\") pod \"machine-config-controller-f9cdd68f7-4nqqd\" (UID: \"71dececc-a0db-4099-9449-023def196d45\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-4nqqd" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.907790 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pkll\" (UniqueName: \"kubernetes.io/projected/25469bc4-e2e1-41c2-9b76-7f084b1feb46-kube-api-access-9pkll\") pod \"router-default-68cf44c8b8-z4875\" (UID: \"25469bc4-e2e1-41c2-9b76-7f084b1feb46\") " pod="openshift-ingress/router-default-68cf44c8b8-z4875" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.934371 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgzd5\" (UniqueName: \"kubernetes.io/projected/c45456da-7004-44dd-8bf8-f3bf8f0fa6f8-kube-api-access-tgzd5\") pod \"service-ca-operator-5b9c976747-df6lk\" (UID: \"c45456da-7004-44dd-8bf8-f3bf8f0fa6f8\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-df6lk" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.937904 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-rwmlc"] Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.939316 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-l2qqh"] Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.939810 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-k5x25" 
event={"ID":"4f9f2345-5823-4288-ad4b-e49b1088cba4","Type":"ContainerStarted","Data":"088ee87b323d25a5535eb59218f289bbab4e1f0bbed2a3618f75735f3dd4d1a2"} Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.944522 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-sftp2" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.945098 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmllt\" (UniqueName: \"kubernetes.io/projected/2fe92145-224c-4f45-a28e-78caadd67d93-kube-api-access-rmllt\") pod \"ingress-canary-b48tj\" (UID: \"2fe92145-224c-4f45-a28e-78caadd67d93\") " pod="openshift-ingress-canary/ingress-canary-b48tj" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.945399 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-4nqqd" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.957429 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:48 crc kubenswrapper[5131]: E0107 09:51:48.957768 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:49.457756778 +0000 UTC m=+137.624058342 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.958133 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2r6qr" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.960934 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-spb8r\" (UniqueName: \"kubernetes.io/projected/de8ef978-428b-4c64-84a1-670939953bae-kube-api-access-spb8r\") pod \"dns-default-h4w78\" (UID: \"de8ef978-428b-4c64-84a1-670939953bae\") " pod="openshift-dns/dns-default-h4w78" Jan 07 09:51:48 crc kubenswrapper[5131]: I0107 09:51:48.962541 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-4ktsk" event={"ID":"0fe1be72-61f5-4433-908d-225206c4c7a1","Type":"ContainerStarted","Data":"f5507d3e35abf5fcfb123437d3b52f88d72e02a7799690297fbdbfdbadbf7271"} Jan 07 09:51:49 crc kubenswrapper[5131]: I0107 09:51:49.000101 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-b48tj" Jan 07 09:51:49 crc kubenswrapper[5131]: I0107 09:51:49.000524 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-z4875" Jan 07 09:51:49 crc kubenswrapper[5131]: I0107 09:51:49.004877 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-gqz76"] Jan 07 09:51:49 crc kubenswrapper[5131]: I0107 09:51:49.005733 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-nhgrp" Jan 07 09:51:49 crc kubenswrapper[5131]: I0107 09:51:49.011700 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-h884s" Jan 07 09:51:49 crc kubenswrapper[5131]: I0107 09:51:49.011734 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-h4w78" Jan 07 09:51:49 crc kubenswrapper[5131]: I0107 09:51:49.012180 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-df6lk" Jan 07 09:51:49 crc kubenswrapper[5131]: I0107 09:51:49.019792 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-7cl88" Jan 07 09:51:49 crc kubenswrapper[5131]: I0107 09:51:49.027586 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-grvm4" Jan 07 09:51:49 crc kubenswrapper[5131]: I0107 09:51:49.035289 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-cnl99" Jan 07 09:51:49 crc kubenswrapper[5131]: I0107 09:51:49.041082 5131 ???:1] "http: TLS handshake error from 192.168.126.11:50996: no serving certificate available for the kubelet" Jan 07 09:51:49 crc kubenswrapper[5131]: I0107 09:51:49.048558 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-755bb95488-cw4c4" podStartSLOduration=117.048541405 podStartE2EDuration="1m57.048541405s" podCreationTimestamp="2026-01-07 09:49:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:51:49.04664796 +0000 UTC m=+137.212949524" watchObservedRunningTime="2026-01-07 09:51:49.048541405 +0000 UTC m=+137.214842969" Jan 07 09:51:49 crc kubenswrapper[5131]: I0107 09:51:49.059576 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 07 09:51:49 crc kubenswrapper[5131]: E0107 09:51:49.061207 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:49.56118361 +0000 UTC m=+137.727485174 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:49 crc kubenswrapper[5131]: I0107 09:51:49.080317 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-vm2vl" podStartSLOduration=117.080302424 podStartE2EDuration="1m57.080302424s" podCreationTimestamp="2026-01-07 09:49:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:51:49.079443786 +0000 UTC m=+137.245745350" watchObservedRunningTime="2026-01-07 09:51:49.080302424 +0000 UTC m=+137.246603978" Jan 07 09:51:49 crc kubenswrapper[5131]: I0107 09:51:49.124553 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-vb59d"] Jan 07 09:51:49 crc kubenswrapper[5131]: I0107 09:51:49.161065 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:49 crc kubenswrapper[5131]: E0107 09:51:49.161380 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-07 09:51:49.661367957 +0000 UTC m=+137.827669511 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:49 crc kubenswrapper[5131]: I0107 09:51:49.167226 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-nhgrp" podStartSLOduration=116.167216228 podStartE2EDuration="1m56.167216228s" podCreationTimestamp="2026-01-07 09:49:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:51:49.166454034 +0000 UTC m=+137.332755608" watchObservedRunningTime="2026-01-07 09:51:49.167216228 +0000 UTC m=+137.333517792" Jan 07 09:51:49 crc kubenswrapper[5131]: I0107 09:51:49.202519 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64d44f6ddf-xvbzj" podStartSLOduration=117.202504885 podStartE2EDuration="1m57.202504885s" podCreationTimestamp="2026-01-07 09:49:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:51:49.202106777 +0000 UTC m=+137.368408331" watchObservedRunningTime="2026-01-07 09:51:49.202504885 +0000 UTC m=+137.368806449" Jan 07 09:51:49 crc kubenswrapper[5131]: I0107 09:51:49.240027 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-54c688565-vjhjw" 
podStartSLOduration=117.240009711 podStartE2EDuration="1m57.240009711s" podCreationTimestamp="2026-01-07 09:49:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:51:49.239199465 +0000 UTC m=+137.405501029" watchObservedRunningTime="2026-01-07 09:51:49.240009711 +0000 UTC m=+137.406311285" Jan 07 09:51:49 crc kubenswrapper[5131]: I0107 09:51:49.262181 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 07 09:51:49 crc kubenswrapper[5131]: E0107 09:51:49.262389 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:49.76236242 +0000 UTC m=+137.928663984 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:49 crc kubenswrapper[5131]: I0107 09:51:49.262685 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:49 crc kubenswrapper[5131]: E0107 09:51:49.263021 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:49.763013629 +0000 UTC m=+137.929315193 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:49 crc kubenswrapper[5131]: I0107 09:51:49.363648 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 07 09:51:49 crc kubenswrapper[5131]: W0107 09:51:49.363737 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bbf073e_c62d_4074_a057_00541ac18caa.slice/crio-aad2bad4d23d1cbaf5ba6ae1f055cb26bb5beb6ccc73331b8f3a997e8ae7775a WatchSource:0}: Error finding container aad2bad4d23d1cbaf5ba6ae1f055cb26bb5beb6ccc73331b8f3a997e8ae7775a: Status 404 returned error can't find the container with id aad2bad4d23d1cbaf5ba6ae1f055cb26bb5beb6ccc73331b8f3a997e8ae7775a Jan 07 09:51:49 crc kubenswrapper[5131]: E0107 09:51:49.363867 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:49.863845635 +0000 UTC m=+138.030147199 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:49 crc kubenswrapper[5131]: I0107 09:51:49.396970 5131 ???:1] "http: TLS handshake error from 192.168.126.11:51004: no serving certificate available for the kubelet" Jan 07 09:51:49 crc kubenswrapper[5131]: I0107 09:51:49.465470 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:49 crc kubenswrapper[5131]: E0107 09:51:49.465829 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:49.965813562 +0000 UTC m=+138.132115126 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:49 crc kubenswrapper[5131]: I0107 09:51:49.566655 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 07 09:51:49 crc kubenswrapper[5131]: E0107 09:51:49.567217 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:50.067175361 +0000 UTC m=+138.233476925 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:49 crc kubenswrapper[5131]: I0107 09:51:49.567530 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:49 crc kubenswrapper[5131]: E0107 09:51:49.568034 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:50.068015659 +0000 UTC m=+138.234317223 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:49 crc kubenswrapper[5131]: I0107 09:51:49.669990 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 07 09:51:49 crc kubenswrapper[5131]: E0107 09:51:49.670487 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:50.170472248 +0000 UTC m=+138.336773812 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:49 crc kubenswrapper[5131]: I0107 09:51:49.680118 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-q9xx7"] Jan 07 09:51:49 crc kubenswrapper[5131]: I0107 09:51:49.697919 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-7dn5b"] Jan 07 09:51:49 crc kubenswrapper[5131]: I0107 09:51:49.742801 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-mrfk7"] Jan 07 09:51:49 crc kubenswrapper[5131]: I0107 09:51:49.747715 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-wm88r"] Jan 07 09:51:49 crc kubenswrapper[5131]: I0107 09:51:49.773086 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:49 crc kubenswrapper[5131]: E0107 09:51:49.773540 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-07 09:51:50.273523723 +0000 UTC m=+138.439825287 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:49 crc kubenswrapper[5131]: I0107 09:51:49.833176 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-pssml"] Jan 07 09:51:49 crc kubenswrapper[5131]: I0107 09:51:49.874072 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 07 09:51:49 crc kubenswrapper[5131]: E0107 09:51:49.874543 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:50.374522206 +0000 UTC m=+138.540823770 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:49 crc kubenswrapper[5131]: W0107 09:51:49.888287 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3aec8df7_9c9b_4f00_9a8d_ab05bbecb4d4.slice/crio-717c2db8d802d38f49baf5977f544d43255c4834f4048aaf2e6251c048c9276b WatchSource:0}: Error finding container 717c2db8d802d38f49baf5977f544d43255c4834f4048aaf2e6251c048c9276b: Status 404 returned error can't find the container with id 717c2db8d802d38f49baf5977f544d43255c4834f4048aaf2e6251c048c9276b Jan 07 09:51:49 crc kubenswrapper[5131]: I0107 09:51:49.975453 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:49 crc kubenswrapper[5131]: E0107 09:51:49.976901 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:50.476882601 +0000 UTC m=+138.643184165 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:49 crc kubenswrapper[5131]: I0107 09:51:49.980155 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-grvm4" event={"ID":"c61a2db1-fb94-4541-bc6a-57a2f0075072","Type":"ContainerStarted","Data":"96372a87f1a7a128c87f7a00e9497f17ed0a4d11f5a3b694206366a24622f9d0"} Jan 07 09:51:50 crc kubenswrapper[5131]: I0107 09:51:50.020495 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-gqz76" event={"ID":"0bbf073e-c62d-4074-a057-00541ac18caa","Type":"ContainerStarted","Data":"aad2bad4d23d1cbaf5ba6ae1f055cb26bb5beb6ccc73331b8f3a997e8ae7775a"} Jan 07 09:51:50 crc kubenswrapper[5131]: I0107 09:51:50.021637 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-h884s" event={"ID":"38b19ba3-6ae3-4eef-9398-6ca8651cc5c1","Type":"ContainerStarted","Data":"0fb6d429aec6a3a44dcbd3ca175365304c8890f4000a402d018a52f5e296f723"} Jan 07 09:51:50 crc kubenswrapper[5131]: I0107 09:51:50.036992 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-z4875" event={"ID":"25469bc4-e2e1-41c2-9b76-7f084b1feb46","Type":"ContainerStarted","Data":"ae7384371dad91e63ed16b6ec8f34a6d7d3fb127c0c1430b927f14b7754e0e58"} Jan 07 09:51:50 crc kubenswrapper[5131]: I0107 09:51:50.050076 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-7dn5b" 
event={"ID":"ea011b17-d07a-47da-9c01-d2a384306bcd","Type":"ContainerStarted","Data":"8849b33413e4b97e4e66486aeaadf93ea217aee8a5f593edfc0810320f6c09aa"} Jan 07 09:51:50 crc kubenswrapper[5131]: I0107 09:51:50.054792 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-q9xx7" event={"ID":"20b2fa4c-8df5-43ac-a56a-397cb97e918d","Type":"ContainerStarted","Data":"280828a9e46efc11d8efb012cb3b3ca65ea4b402de1e8e2207cbafc7b5c169f6"} Jan 07 09:51:50 crc kubenswrapper[5131]: I0107 09:51:50.057222 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-4ktsk" event={"ID":"0fe1be72-61f5-4433-908d-225206c4c7a1","Type":"ContainerStarted","Data":"d27c838050718f8d0540635e0657992d574b0da1be5ffaec45f4fd8b54ee264f"} Jan 07 09:51:50 crc kubenswrapper[5131]: I0107 09:51:50.077261 5131 ???:1] "http: TLS handshake error from 192.168.126.11:51008: no serving certificate available for the kubelet" Jan 07 09:51:50 crc kubenswrapper[5131]: I0107 09:51:50.077413 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 07 09:51:50 crc kubenswrapper[5131]: E0107 09:51:50.081388 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:50.5813627 +0000 UTC m=+138.747664264 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:50 crc kubenswrapper[5131]: I0107 09:51:50.081502 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:50 crc kubenswrapper[5131]: E0107 09:51:50.082690 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:50.582680839 +0000 UTC m=+138.748982403 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:50 crc kubenswrapper[5131]: I0107 09:51:50.093931 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-rwmlc" event={"ID":"cb015a21-e0c4-4c90-a563-ec8010ee6bd2","Type":"ContainerStarted","Data":"a8247f1f7ae6e21dfc12ea0db1c0312df60b827bb8bbced7511671da5f162444"} Jan 07 09:51:50 crc kubenswrapper[5131]: I0107 09:51:50.114909 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-fgq7r"] Jan 07 09:51:50 crc kubenswrapper[5131]: I0107 09:51:50.116145 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-mrfk7" event={"ID":"1697c475-b030-40da-9ed0-7884931c55fd","Type":"ContainerStarted","Data":"1655b270d631c55f0b08f813293cd9fc1c3d3f116e210eb4b789357e15ce5728"} Jan 07 09:51:50 crc kubenswrapper[5131]: I0107 09:51:50.121504 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-vb59d" event={"ID":"3444e2f9-d027-4e5d-b655-d564292fb959","Type":"ContainerStarted","Data":"39b5f2cf987a9c68bb929345c0046036c7216fa3b89550a6a7c3693a38b5944f"} Jan 07 09:51:50 crc kubenswrapper[5131]: I0107 09:51:50.131111 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-g5kcd" podStartSLOduration=118.129435738 podStartE2EDuration="1m58.129435738s" 
podCreationTimestamp="2026-01-07 09:49:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:51:50.126229135 +0000 UTC m=+138.292530699" watchObservedRunningTime="2026-01-07 09:51:50.129435738 +0000 UTC m=+138.295737302" Jan 07 09:51:50 crc kubenswrapper[5131]: I0107 09:51:50.157682 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-k5x25" event={"ID":"4f9f2345-5823-4288-ad4b-e49b1088cba4","Type":"ContainerStarted","Data":"515d76624661327a815da98397c6f511984e8e63bac2b478f6b35c9a956b8ec1"} Jan 07 09:51:50 crc kubenswrapper[5131]: I0107 09:51:50.167959 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-l2qqh" event={"ID":"c39b15df-a1bc-4922-9712-8fba72c00fdf","Type":"ContainerStarted","Data":"06491930e022619bfafc056e9132790eed7c31889b089cbba2381f901c3b8fc5"} Jan 07 09:51:50 crc kubenswrapper[5131]: I0107 09:51:50.184099 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 07 09:51:50 crc kubenswrapper[5131]: E0107 09:51:50.184495 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:50.684457507 +0000 UTC m=+138.850759071 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:50 crc kubenswrapper[5131]: I0107 09:51:50.205418 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-jz8vb"] Jan 07 09:51:50 crc kubenswrapper[5131]: I0107 09:51:50.205453 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-4rj8b"] Jan 07 09:51:50 crc kubenswrapper[5131]: W0107 09:51:50.208021 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1586b0f3_181d_4c60_9dae_15afe62d18e3.slice/crio-d60b2283d0df8a5b4b4f724ead27f63cdb1af51508a847973d2c4687951582fd WatchSource:0}: Error finding container d60b2283d0df8a5b4b4f724ead27f63cdb1af51508a847973d2c4687951582fd: Status 404 returned error can't find the container with id d60b2283d0df8a5b4b4f724ead27f63cdb1af51508a847973d2c4687951582fd Jan 07 09:51:50 crc kubenswrapper[5131]: I0107 09:51:50.222938 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-jk2mp"] Jan 07 09:51:50 crc kubenswrapper[5131]: I0107 09:51:50.227346 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-2zfmr"] Jan 07 09:51:50 crc kubenswrapper[5131]: I0107 09:51:50.239293 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-psjk8"] Jan 07 09:51:50 crc kubenswrapper[5131]: 
I0107 09:51:50.242415 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-vdsqq"] Jan 07 09:51:50 crc kubenswrapper[5131]: I0107 09:51:50.246985 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-mh9sm"] Jan 07 09:51:50 crc kubenswrapper[5131]: I0107 09:51:50.286198 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:50 crc kubenswrapper[5131]: E0107 09:51:50.286785 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:50.786765639 +0000 UTC m=+138.953067203 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:50 crc kubenswrapper[5131]: I0107 09:51:50.325112 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-b48tj"] Jan 07 09:51:50 crc kubenswrapper[5131]: I0107 09:51:50.342261 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-cnl99"] Jan 07 09:51:50 crc kubenswrapper[5131]: I0107 09:51:50.366851 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-pxdmv"] Jan 07 09:51:50 crc kubenswrapper[5131]: I0107 09:51:50.381458 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-lbptl"] Jan 07 09:51:50 crc kubenswrapper[5131]: I0107 09:51:50.391379 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 07 09:51:50 crc kubenswrapper[5131]: E0107 09:51:50.391679 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:50.891648456 +0000 UTC m=+139.057950020 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:50 crc kubenswrapper[5131]: I0107 09:51:50.393143 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:50 crc kubenswrapper[5131]: E0107 09:51:50.393452 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:50.893435826 +0000 UTC m=+139.059737390 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:50 crc kubenswrapper[5131]: I0107 09:51:50.401813 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2r6qr"] Jan 07 09:51:50 crc kubenswrapper[5131]: I0107 09:51:50.412409 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-7cl88"] Jan 07 09:51:50 crc kubenswrapper[5131]: I0107 09:51:50.426473 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-h4w78"] Jan 07 09:51:50 crc kubenswrapper[5131]: I0107 09:51:50.428093 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-hhbw5"] Jan 07 09:51:50 crc kubenswrapper[5131]: I0107 09:51:50.437388 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-sftp2"] Jan 07 09:51:50 crc kubenswrapper[5131]: I0107 09:51:50.452674 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-hxnn7"] Jan 07 09:51:50 crc kubenswrapper[5131]: I0107 09:51:50.453542 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-67c89758df-flx6z" podStartSLOduration=118.453525841 podStartE2EDuration="1m58.453525841s" podCreationTimestamp="2026-01-07 09:49:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:51:50.45327395 +0000 UTC m=+138.619575504" watchObservedRunningTime="2026-01-07 09:51:50.453525841 +0000 UTC m=+138.619827405" Jan 07 09:51:50 crc kubenswrapper[5131]: I0107 09:51:50.463815 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-df6lk"] Jan 07 09:51:50 crc kubenswrapper[5131]: I0107 09:51:50.467248 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29462985-nd9tw"] Jan 07 09:51:50 crc kubenswrapper[5131]: W0107 09:51:50.485375 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21eae8d8_8c33_4c90_b38d_d3fccae28e7d.slice/crio-acdf881a0afad818b97b69a7a1cc9f2032e33a0f60a43359bb974f85d690679a WatchSource:0}: Error finding container acdf881a0afad818b97b69a7a1cc9f2032e33a0f60a43359bb974f85d690679a: Status 404 returned error can't find the container with id acdf881a0afad818b97b69a7a1cc9f2032e33a0f60a43359bb974f85d690679a Jan 07 09:51:50 crc kubenswrapper[5131]: I0107 09:51:50.493893 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 07 09:51:50 crc kubenswrapper[5131]: E0107 09:51:50.494206 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:50.994180988 +0000 UTC m=+139.160482552 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:50 crc kubenswrapper[5131]: I0107 09:51:50.494490 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:50 crc kubenswrapper[5131]: E0107 09:51:50.494774 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:50.994761854 +0000 UTC m=+139.161063418 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:50 crc kubenswrapper[5131]: I0107 09:51:50.526389 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-4nqqd"] Jan 07 09:51:50 crc kubenswrapper[5131]: I0107 09:51:50.595806 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 07 09:51:50 crc kubenswrapper[5131]: E0107 09:51:50.596452 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:51.096406296 +0000 UTC m=+139.262707860 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:50 crc kubenswrapper[5131]: W0107 09:51:50.696139 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71dececc_a0db_4099_9449_023def196d45.slice/crio-baa7e2e219137a852a5725dcfffbf669d5ec488e4a8edc175ee9b9e963e770a1 WatchSource:0}: Error finding container baa7e2e219137a852a5725dcfffbf669d5ec488e4a8edc175ee9b9e963e770a1: Status 404 returned error can't find the container with id baa7e2e219137a852a5725dcfffbf669d5ec488e4a8edc175ee9b9e963e770a1 Jan 07 09:51:50 crc kubenswrapper[5131]: I0107 09:51:50.697865 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:50 crc kubenswrapper[5131]: E0107 09:51:50.698190 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:51.198178084 +0000 UTC m=+139.364479648 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:50 crc kubenswrapper[5131]: I0107 09:51:50.724852 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-9ddfb9f55-k5x25" podStartSLOduration=118.724820075 podStartE2EDuration="1m58.724820075s" podCreationTimestamp="2026-01-07 09:49:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:51:50.724006378 +0000 UTC m=+138.890307952" watchObservedRunningTime="2026-01-07 09:51:50.724820075 +0000 UTC m=+138.891121639" Jan 07 09:51:50 crc kubenswrapper[5131]: I0107 09:51:50.799320 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 07 09:51:50 crc kubenswrapper[5131]: E0107 09:51:50.799616 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:51.299599796 +0000 UTC m=+139.465901350 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:50 crc kubenswrapper[5131]: I0107 09:51:50.799928 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:50 crc kubenswrapper[5131]: E0107 09:51:50.800194 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:51.300187303 +0000 UTC m=+139.466488867 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:50 crc kubenswrapper[5131]: I0107 09:51:50.903594 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 07 09:51:50 crc kubenswrapper[5131]: E0107 09:51:50.904415 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:51.404393229 +0000 UTC m=+139.570694793 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:50 crc kubenswrapper[5131]: I0107 09:51:50.906581 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:50 crc kubenswrapper[5131]: E0107 09:51:50.907004 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:51.406989985 +0000 UTC m=+139.573291549 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.008629 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 07 09:51:51 crc kubenswrapper[5131]: E0107 09:51:51.008999 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:51.508982052 +0000 UTC m=+139.675283616 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.111094 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:51 crc kubenswrapper[5131]: E0107 09:51:51.111415 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:51.611403139 +0000 UTC m=+139.777704703 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.212441 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 07 09:51:51 crc kubenswrapper[5131]: E0107 09:51:51.212587 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:51.71255861 +0000 UTC m=+139.878860164 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.213114 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:51 crc kubenswrapper[5131]: E0107 09:51:51.213502 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:51.713491502 +0000 UTC m=+139.879793066 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.226538 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-sftp2" event={"ID":"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4","Type":"ContainerStarted","Data":"df9f7b11cf60fa03b49500b61024f159737cd6933dda063ebebbb2608753f64f"} Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.240298 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-grvm4" event={"ID":"c61a2db1-fb94-4541-bc6a-57a2f0075072","Type":"ContainerStarted","Data":"020bec0b8df66d061898080a4918b13e7b30e9a5fbe18d9973f6f4e9e6964d54"} Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.241190 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-multus/cni-sysctl-allowlist-ds-grvm4" Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.253594 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-4nqqd" event={"ID":"71dececc-a0db-4099-9449-023def196d45","Type":"ContainerStarted","Data":"baa7e2e219137a852a5725dcfffbf669d5ec488e4a8edc175ee9b9e963e770a1"} Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.276175 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-grvm4" Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.291328 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29462985-nd9tw" event={"ID":"6f5047a5-cbaa-4193-a89d-901db9b002d8","Type":"ContainerStarted","Data":"12eebd4427da690fe58a9213eebeed81eccd914257620e0dd1be0df790454e2d"} Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.296742 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-grvm4" podStartSLOduration=6.296719781 podStartE2EDuration="6.296719781s" podCreationTimestamp="2026-01-07 09:51:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:51:51.255187195 +0000 UTC m=+139.421488759" watchObservedRunningTime="2026-01-07 09:51:51.296719781 +0000 UTC m=+139.463021365" Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.302946 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-wm88r" event={"ID":"3aec8df7-9c9b-4f00-9a8d-ab05bbecb4d4","Type":"ContainerStarted","Data":"aefab94f984f620e2a0811d3089a379f348877117a898fd42810568e76b05f04"} Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.303200 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-wm88r" event={"ID":"3aec8df7-9c9b-4f00-9a8d-ab05bbecb4d4","Type":"ContainerStarted","Data":"717c2db8d802d38f49baf5977f544d43255c4834f4048aaf2e6251c048c9276b"} Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.310477 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-pxdmv" event={"ID":"0d5f65eb-0ec3-427f-9153-62bbc1651bc8","Type":"ContainerStarted","Data":"0326c7ea2353b6f9f8706834a5b520a6fe5e039ced455efbb93281534689e528"} Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.311897 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-server-h884s" event={"ID":"38b19ba3-6ae3-4eef-9398-6ca8651cc5c1","Type":"ContainerStarted","Data":"0b1720a8323266c13181b1acfee1e97b56abe98c92cb89369d0fc1952db29a1c"} Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.313687 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 07 09:51:51 crc kubenswrapper[5131]: E0107 09:51:51.314498 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:51.814468434 +0000 UTC m=+139.980770078 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.320795 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-z4875" event={"ID":"25469bc4-e2e1-41c2-9b76-7f084b1feb46","Type":"ContainerStarted","Data":"c20674586c694166aa226270f9b2f1eb3b46b179b452c2e4c0fb69ef75127a3a"}
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.330319 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-7dn5b" event={"ID":"ea011b17-d07a-47da-9c01-d2a384306bcd","Type":"ContainerStarted","Data":"57bf6e09c5d2c204bc352fc772ea581a2a10c483af209f74d30f419d18dfba2a"}
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.330557 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-7dn5b"
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.339093 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-q9xx7" event={"ID":"20b2fa4c-8df5-43ac-a56a-397cb97e918d","Type":"ContainerStarted","Data":"664983f2eb04cddc759eaaea7a0f22d6d1847f6614d64b45bdac3483a9d1679b"}
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.339135 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-q9xx7" event={"ID":"20b2fa4c-8df5-43ac-a56a-397cb97e918d","Type":"ContainerStarted","Data":"6d11905f1c748a453547de95c612af6a27ad6985df315e174f9c7f221594c811"}
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.346254 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-7dn5b"
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.355911 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-h884s" podStartSLOduration=6.355886375 podStartE2EDuration="6.355886375s" podCreationTimestamp="2026-01-07 09:51:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:51:51.327874063 +0000 UTC m=+139.494175637" watchObservedRunningTime="2026-01-07 09:51:51.355886375 +0000 UTC m=+139.522187939"
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.356439 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-b48tj" event={"ID":"2fe92145-224c-4f45-a28e-78caadd67d93","Type":"ContainerStarted","Data":"14f17a87ad2770c1d6cb753e0ae9e7bdfe67ddf6bfacb2758204e10a0feb9e96"}
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.356764 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-68cf44c8b8-z4875" podStartSLOduration=119.356757154 podStartE2EDuration="1m59.356757154s" podCreationTimestamp="2026-01-07 09:49:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:51:51.354285363 +0000 UTC m=+139.520586927" watchObservedRunningTime="2026-01-07 09:51:51.356757154 +0000 UTC m=+139.523058728"
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.363399 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-hhbw5" event={"ID":"21eae8d8-8c33-4c90-b38d-d3fccae28e7d","Type":"ContainerStarted","Data":"acdf881a0afad818b97b69a7a1cc9f2032e33a0f60a43359bb974f85d690679a"}
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.364649 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-pssml" event={"ID":"4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4","Type":"ContainerStarted","Data":"cd484f176832a1aef66355bbd41c7daa9a28b2978cd79d337835f9fea32a1699"}
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.364671 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-pssml" event={"ID":"4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4","Type":"ContainerStarted","Data":"d9cd023397ab571a5d5a47de7628c6766a688163bf51e0372674ce9a937fc5a2"}
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.365475 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-pssml"
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.375715 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-7dn5b" podStartSLOduration=118.37569759 podStartE2EDuration="1m58.37569759s" podCreationTimestamp="2026-01-07 09:49:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:51:51.3752322 +0000 UTC m=+139.541533764" watchObservedRunningTime="2026-01-07 09:51:51.37569759 +0000 UTC m=+139.541999144"
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.393292 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-h4w78" event={"ID":"de8ef978-428b-4c64-84a1-670939953bae","Type":"ContainerStarted","Data":"7624de73ebbb1fdb20d58f37689eda9c342f08b28dfe20929090757d4858c959"}
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.400006 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-lbptl" event={"ID":"8b23000d-6c61-4c26-9d45-4433be4c9408","Type":"ContainerStarted","Data":"380efdc908f725e7d3839b1bf6944833bf21bb12249bfea2338fea3753b33b02"}
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.401899 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-jz8vb" event={"ID":"c7ffb596-9bbc-4f89-b4f7-bdd77a5a420b","Type":"ContainerStarted","Data":"714e8ba68eb0d516d075aa3443a74cb9189a38222a0ad7afe1f9df9be869437c"}
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.401922 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-jz8vb" event={"ID":"c7ffb596-9bbc-4f89-b4f7-bdd77a5a420b","Type":"ContainerStarted","Data":"370104a6a89fef721b8efbd248384270d0f3fb72e71109a6c75d0265be635880"}
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.420765 5131 ???:1] "http: TLS handshake error from 192.168.126.11:51012: no serving certificate available for the kubelet"
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.421604 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4"
Jan 07 09:51:51 crc kubenswrapper[5131]: E0107 09:51:51.423747 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:51.923732637 +0000 UTC m=+140.090034201 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.429734 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-vb59d" event={"ID":"3444e2f9-d027-4e5d-b655-d564292fb959","Type":"ContainerStarted","Data":"8588848d6279647ecd647b45272e8206c2a5496b1c2d13670467387c6596f2a5"}
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.430650 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-vb59d"
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.436550 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-b48tj" podStartSLOduration=6.436536259 podStartE2EDuration="6.436536259s" podCreationTimestamp="2026-01-07 09:51:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:51:51.436076609 +0000 UTC m=+139.602378173" watchObservedRunningTime="2026-01-07 09:51:51.436536259 +0000 UTC m=+139.602837823"
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.438314 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-q9xx7" podStartSLOduration=119.438304818 podStartE2EDuration="1m59.438304818s" podCreationTimestamp="2026-01-07 09:49:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:51:51.409913809 +0000 UTC m=+139.576215373" watchObservedRunningTime="2026-01-07 09:51:51.438304818 +0000 UTC m=+139.604606382"
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.446256 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-2zfmr" event={"ID":"18ffb9d1-d0b4-41bf-84ed-6d47984f831e","Type":"ContainerStarted","Data":"4a6d34e68b9858160543673d9a5841e22e5361fe9a64c5ec59322a25c2eeadd2"}
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.446312 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-2zfmr" event={"ID":"18ffb9d1-d0b4-41bf-84ed-6d47984f831e","Type":"ContainerStarted","Data":"cb07fe74382a37fa67c4166240a218114b3bb4379a325cf6b09d3a6493fdfe6b"}
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.461669 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-vb59d"
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.462523 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-jz8vb" podStartSLOduration=119.46250857 podStartE2EDuration="1m59.46250857s" podCreationTimestamp="2026-01-07 09:49:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:51:51.461925784 +0000 UTC m=+139.628227368" watchObservedRunningTime="2026-01-07 09:51:51.46250857 +0000 UTC m=+139.628810134"
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.474253 5131 generic.go:358] "Generic (PLEG): container finished" podID="92c0b6a3-aea1-4854-9278-710a315edd4f" containerID="d0d40563433e99cdc6281e23ae8e1fb6bbc7046d7e2c3379c2263a9db61d0cd4" exitCode=0
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.474341 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-jk2mp" event={"ID":"92c0b6a3-aea1-4854-9278-710a315edd4f","Type":"ContainerDied","Data":"d0d40563433e99cdc6281e23ae8e1fb6bbc7046d7e2c3379c2263a9db61d0cd4"}
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.474366 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-jk2mp" event={"ID":"92c0b6a3-aea1-4854-9278-710a315edd4f","Type":"ContainerStarted","Data":"af635aa7392f99368813011fb3ee7470c30793db644d43cb79d87371e386b4ca"}
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.482315 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-psjk8" event={"ID":"45f1adbe-9004-4ad3-b4e2-f8a0c6936502","Type":"ContainerStarted","Data":"abc634fa8667063bb4dcb0379517e75511b3541139a67f834295b67ac26d0d16"}
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.482349 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-psjk8" event={"ID":"45f1adbe-9004-4ad3-b4e2-f8a0c6936502","Type":"ContainerStarted","Data":"aeb57e87773a7edd8bb5198fb400a7d8d7adf1527c13e17f4a0c2d7c4aa45b4b"}
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.482755 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-psjk8"
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.498095 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65b6cccf98-pssml" podStartSLOduration=119.49807757 podStartE2EDuration="1m59.49807757s" podCreationTimestamp="2026-01-07 09:49:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:51:51.497380948 +0000 UTC m=+139.663682512" watchObservedRunningTime="2026-01-07 09:51:51.49807757 +0000 UTC m=+139.664379134"
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.513992 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-psjk8" podStartSLOduration=118.51397967 podStartE2EDuration="1m58.51397967s" podCreationTimestamp="2026-01-07 09:49:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:51:51.513601723 +0000 UTC m=+139.679903297" watchObservedRunningTime="2026-01-07 09:51:51.51397967 +0000 UTC m=+139.680281234"
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.517290 5131 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-psjk8 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused" start-of-body=
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.517342 5131 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-psjk8" podUID="45f1adbe-9004-4ad3-b4e2-f8a0c6936502" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused"
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.528617 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 07 09:51:51 crc kubenswrapper[5131]: E0107 09:51:51.545000 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:52.044978045 +0000 UTC m=+140.211279609 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.551823 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-gqz76" event={"ID":"0bbf073e-c62d-4074-a057-00541ac18caa","Type":"ContainerStarted","Data":"a8d871fac8273a7e4d51cf471187aa3e5f015dbb6711d001e033b5495bef4796"}
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.551890 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-gqz76" event={"ID":"0bbf073e-c62d-4074-a057-00541ac18caa","Type":"ContainerStarted","Data":"321de4fd551afe0c7076f775af77bbb6a7ac16d37786eb61e3606591376a97d9"}
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.561675 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-df6lk" event={"ID":"c45456da-7004-44dd-8bf8-f3bf8f0fa6f8","Type":"ContainerStarted","Data":"217bc0f528e16c54cdae554941b491aa86e256bbc99bd6dd8ef918207da54ad6"}
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.563891 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-mh9sm" event={"ID":"683287b8-61e8-4fb7-b688-586df63f560e","Type":"ContainerStarted","Data":"44122a7e76eaebb424572ebe4066e5dddeeaa554728e0d5b490634e7400e097b"}
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.582768 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-hxnn7" event={"ID":"3e143601-07e2-425b-8478-f27f8045c536","Type":"ContainerStarted","Data":"5df7542b104e4318056b3b37cb65ff5aed7b9d4b70a079a323c642b5cd70dfae"}
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.585456 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-2zfmr" podStartSLOduration=119.585434583 podStartE2EDuration="1m59.585434583s" podCreationTimestamp="2026-01-07 09:49:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:51:51.57909093 +0000 UTC m=+139.745392504" watchObservedRunningTime="2026-01-07 09:51:51.585434583 +0000 UTC m=+139.751736147"
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.591022 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-4rj8b" event={"ID":"be5efe8d-ba1a-4bc1-b232-9eeff43c3277","Type":"ContainerStarted","Data":"c85b62ce54bbe2bc2f1ea1a4d09925f332abcb11d7bb735f4187101b663aafbc"}
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.601550 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-vb59d" podStartSLOduration=118.601532683 podStartE2EDuration="1m58.601532683s" podCreationTimestamp="2026-01-07 09:49:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:51:51.600434964 +0000 UTC m=+139.766736548" watchObservedRunningTime="2026-01-07 09:51:51.601532683 +0000 UTC m=+139.767834247"
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.626585 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-fgq7r" event={"ID":"1586b0f3-181d-4c60-9dae-15afe62d18e3","Type":"ContainerStarted","Data":"a860b150f409029ace68dff2e6904d13010d84a323432e6109ad8a2cf51424d6"}
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.626626 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-fgq7r" event={"ID":"1586b0f3-181d-4c60-9dae-15afe62d18e3","Type":"ContainerStarted","Data":"d60b2283d0df8a5b4b4f724ead27f63cdb1af51508a847973d2c4687951582fd"}
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.631646 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-rwmlc" event={"ID":"cb015a21-e0c4-4c90-a563-ec8010ee6bd2","Type":"ContainerStarted","Data":"96cda4ec2cf30ec8fc1fae6090ec963588409bd313bc6ccd6b1c48df1d5f98f1"}
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.635947 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-vdsqq" event={"ID":"74766801-5e31-42d8-828f-ab317c8cc228","Type":"ContainerStarted","Data":"1ab398a21ef1f6b18f5baf88691081c6f351429180f125cb8f4880b166f124ab"}
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.635991 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-vdsqq" event={"ID":"74766801-5e31-42d8-828f-ab317c8cc228","Type":"ContainerStarted","Data":"d115a83f5276d88b120055b7bbf6045f088910543f7a4d93e27f6d635adc11c5"}
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.644301 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4"
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.649437 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-vdsqq"
Jan 07 09:51:51 crc kubenswrapper[5131]: E0107 09:51:51.649699 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:52.149686614 +0000 UTC m=+140.315988178 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.650878 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-rwmlc" podStartSLOduration=119.650862467 podStartE2EDuration="1m59.650862467s" podCreationTimestamp="2026-01-07 09:49:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:51:51.649322628 +0000 UTC m=+139.815624192" watchObservedRunningTime="2026-01-07 09:51:51.650862467 +0000 UTC m=+139.817164031"
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.663888 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-4ktsk" event={"ID":"0fe1be72-61f5-4433-908d-225206c4c7a1","Type":"ContainerStarted","Data":"c96ab8be11506f1d25879b39565b5c0df2ede8a3f64399008c93e5baab501e0c"}
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.672159 5131 patch_prober.go:28] interesting pod/downloads-747b44746d-vdsqq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.672237 5131 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-vdsqq" podUID="74766801-5e31-42d8-828f-ab317c8cc228" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.683934 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-69db94689b-gqz76" podStartSLOduration=119.683919174 podStartE2EDuration="1m59.683919174s" podCreationTimestamp="2026-01-07 09:49:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:51:51.681243015 +0000 UTC m=+139.847544589" watchObservedRunningTime="2026-01-07 09:51:51.683919174 +0000 UTC m=+139.850220738"
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.687440 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2r6qr" event={"ID":"52bea4d2-c484-40f1-9e1a-635ce6bcfe62","Type":"ContainerStarted","Data":"267fb256e182e12edff58c929fc35b79dbe6f57c235cceaa1f04e8b97698b2d5"}
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.723681 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-mrfk7" event={"ID":"1697c475-b030-40da-9ed0-7884931c55fd","Type":"ContainerStarted","Data":"fbcc0b4d92087a423b14dffdae57ff1f54fa5b1109f42a876ac1080d378c4598"}
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.724532 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-mrfk7"
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.730162 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-74545575db-fgq7r" podStartSLOduration=118.73014337 podStartE2EDuration="1m58.73014337s" podCreationTimestamp="2026-01-07 09:49:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:51:51.729514162 +0000 UTC m=+139.895815736" watchObservedRunningTime="2026-01-07 09:51:51.73014337 +0000 UTC m=+139.896444934"
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.746071 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7cl88" event={"ID":"c892059c-f661-4684-9a1e-19e0b0070d24","Type":"ContainerStarted","Data":"da3f77663dc9c151821756016292c8ec070331cc5a2e70cf3f9c26d2dbb86ac5"}
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.751513 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 07 09:51:51 crc kubenswrapper[5131]: E0107 09:51:51.752883 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:52.252866426 +0000 UTC m=+140.419167990 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.771435 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-df6lk" podStartSLOduration=118.771413114 podStartE2EDuration="1m58.771413114s" podCreationTimestamp="2026-01-07 09:49:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:51:51.762508137 +0000 UTC m=+139.928809701" watchObservedRunningTime="2026-01-07 09:51:51.771413114 +0000 UTC m=+139.937714678"
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.783584 5131 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-mrfk7 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body=
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.783644 5131 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-mrfk7" podUID="1697c475-b030-40da-9ed0-7884931c55fd" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused"
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.827569 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-mrfk7" podStartSLOduration=119.827551183 podStartE2EDuration="1m59.827551183s" podCreationTimestamp="2026-01-07 09:49:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:51:51.823147036 +0000 UTC m=+139.989448600" watchObservedRunningTime="2026-01-07 09:51:51.827551183 +0000 UTC m=+139.993852747"
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.828649 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-747b44746d-vdsqq" podStartSLOduration=119.828644042 podStartE2EDuration="1m59.828644042s" podCreationTimestamp="2026-01-07 09:49:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:51:51.804755424 +0000 UTC m=+139.971056988" watchObservedRunningTime="2026-01-07 09:51:51.828644042 +0000 UTC m=+139.994945606"
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.853478 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4"
Jan 07 09:51:51 crc kubenswrapper[5131]: E0107 09:51:51.855393 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:52.355380517 +0000 UTC m=+140.521682081 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.862092 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-l2qqh" event={"ID":"c39b15df-a1bc-4922-9712-8fba72c00fdf","Type":"ContainerStarted","Data":"c2ea2942059f749eff9383794f1ffe51775d61caf94ddd86d34f8c74e2e7fbbe"}
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.887138 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-l2qqh" podStartSLOduration=119.887121145 podStartE2EDuration="1m59.887121145s" podCreationTimestamp="2026-01-07 09:49:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:51:51.878482679 +0000 UTC m=+140.044784243" watchObservedRunningTime="2026-01-07 09:51:51.887121145 +0000 UTC m=+140.053422709"
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.887978 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-cnl99" event={"ID":"78325b9f-50a6-4dac-90a8-d28091bb5104","Type":"ContainerStarted","Data":"7458282655deb889186a00ae7818e12b342b859f37f591b219f55e2a8675ea2d"}
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.888483 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-4ktsk" podStartSLOduration=119.888477596 podStartE2EDuration="1m59.888477596s" podCreationTimestamp="2026-01-07 09:49:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:51:51.856034136 +0000 UTC m=+140.022335700" watchObservedRunningTime="2026-01-07 09:51:51.888477596 +0000 UTC m=+140.054779160"
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.955920 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 07 09:51:51 crc kubenswrapper[5131]: E0107 09:51:51.957036 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:52.457005358 +0000 UTC m=+140.623306922 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 07 09:51:51 crc kubenswrapper[5131]: I0107 09:51:51.982425 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-pssml"
Jan 07 09:51:52 crc kubenswrapper[5131]: I0107 09:51:52.009030 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-68cf44c8b8-z4875"
Jan 07 09:51:52 crc kubenswrapper[5131]: I0107 09:51:52.017961 5131 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-z4875 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 07 09:51:52 crc kubenswrapper[5131]: [-]has-synced failed: reason withheld
Jan 07 09:51:52 crc kubenswrapper[5131]: [+]process-running ok
Jan 07 09:51:52 crc kubenswrapper[5131]: healthz check failed
Jan 07 09:51:52 crc kubenswrapper[5131]: I0107 09:51:52.018010 5131 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-z4875" podUID="25469bc4-e2e1-41c2-9b76-7f084b1feb46" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 07 09:51:52 crc kubenswrapper[5131]: I0107 09:51:52.062526 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4"
Jan 07 09:51:52 crc kubenswrapper[5131]: E0107 09:51:52.063868 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:52.563852313 +0000 UTC m=+140.730153877 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 07 09:51:52 crc kubenswrapper[5131]: I0107 09:51:52.103257 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-9ddfb9f55-k5x25"
Jan 07 09:51:52 crc kubenswrapper[5131]: I0107 09:51:52.103988 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-apiserver/apiserver-9ddfb9f55-k5x25"
Jan 07 09:51:52 crc kubenswrapper[5131]: I0107 09:51:52.142324 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-9ddfb9f55-k5x25"
Jan 07 09:51:52 crc kubenswrapper[5131]: I0107 09:51:52.165440 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 07 09:51:52 crc kubenswrapper[5131]: E0107 09:51:52.165713 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:52.665696624 +0000 UTC m=+140.831998188 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 07 09:51:52 crc kubenswrapper[5131]: I0107 09:51:52.262152 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-grvm4"]
Jan 07 09:51:52 crc kubenswrapper[5131]: I0107 09:51:52.292495 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4"
Jan 07 09:51:52 crc kubenswrapper[5131]: E0107 09:51:52.292969 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:52.792955231 +0000 UTC m=+140.959256795 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:52 crc kubenswrapper[5131]: I0107 09:51:52.393898 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 07 09:51:52 crc kubenswrapper[5131]: E0107 09:51:52.393970 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:52.893950474 +0000 UTC m=+141.060252028 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:52 crc kubenswrapper[5131]: I0107 09:51:52.394377 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:52 crc kubenswrapper[5131]: E0107 09:51:52.394728 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:52.894720639 +0000 UTC m=+141.061022203 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:52 crc kubenswrapper[5131]: I0107 09:51:52.495355 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 07 09:51:52 crc kubenswrapper[5131]: E0107 09:51:52.495725 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:52.995706032 +0000 UTC m=+141.162007596 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:52 crc kubenswrapper[5131]: I0107 09:51:52.578858 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-db2q2"] Jan 07 09:51:52 crc kubenswrapper[5131]: I0107 09:51:52.593286 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-db2q2" Jan 07 09:51:52 crc kubenswrapper[5131]: I0107 09:51:52.596721 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:52 crc kubenswrapper[5131]: E0107 09:51:52.597022 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:53.097010449 +0000 UTC m=+141.263312003 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:52 crc kubenswrapper[5131]: I0107 09:51:52.610575 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Jan 07 09:51:52 crc kubenswrapper[5131]: I0107 09:51:52.613109 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-db2q2"] Jan 07 09:51:52 crc kubenswrapper[5131]: I0107 09:51:52.699402 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 07 09:51:52 crc kubenswrapper[5131]: I0107 09:51:52.699575 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07c53e69-8037-4261-a288-5f4505e6f7e5-catalog-content\") pod \"community-operators-db2q2\" (UID: \"07c53e69-8037-4261-a288-5f4505e6f7e5\") " pod="openshift-marketplace/community-operators-db2q2" Jan 07 09:51:52 crc kubenswrapper[5131]: I0107 09:51:52.699617 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcj8g\" (UniqueName: \"kubernetes.io/projected/07c53e69-8037-4261-a288-5f4505e6f7e5-kube-api-access-bcj8g\") pod \"community-operators-db2q2\" (UID: \"07c53e69-8037-4261-a288-5f4505e6f7e5\") " pod="openshift-marketplace/community-operators-db2q2" Jan 07 09:51:52 crc kubenswrapper[5131]: I0107 09:51:52.699651 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07c53e69-8037-4261-a288-5f4505e6f7e5-utilities\") pod \"community-operators-db2q2\" (UID: \"07c53e69-8037-4261-a288-5f4505e6f7e5\") " pod="openshift-marketplace/community-operators-db2q2" Jan 07 09:51:52 crc kubenswrapper[5131]: E0107 09:51:52.699767 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:53.19975004 +0000 UTC m=+141.366051604 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:52 crc kubenswrapper[5131]: I0107 09:51:52.772963 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-l9wkb"] Jan 07 09:51:52 crc kubenswrapper[5131]: I0107 09:51:52.778851 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l9wkb" Jan 07 09:51:52 crc kubenswrapper[5131]: I0107 09:51:52.782198 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Jan 07 09:51:52 crc kubenswrapper[5131]: I0107 09:51:52.790850 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l9wkb"] Jan 07 09:51:52 crc kubenswrapper[5131]: I0107 09:51:52.806550 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07c53e69-8037-4261-a288-5f4505e6f7e5-catalog-content\") pod \"community-operators-db2q2\" (UID: \"07c53e69-8037-4261-a288-5f4505e6f7e5\") " pod="openshift-marketplace/community-operators-db2q2" Jan 07 09:51:52 crc kubenswrapper[5131]: I0107 09:51:52.806602 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bcj8g\" (UniqueName: \"kubernetes.io/projected/07c53e69-8037-4261-a288-5f4505e6f7e5-kube-api-access-bcj8g\") pod \"community-operators-db2q2\" (UID: \"07c53e69-8037-4261-a288-5f4505e6f7e5\") " pod="openshift-marketplace/community-operators-db2q2" 
Jan 07 09:51:52 crc kubenswrapper[5131]: I0107 09:51:52.806625 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:52 crc kubenswrapper[5131]: I0107 09:51:52.806657 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07c53e69-8037-4261-a288-5f4505e6f7e5-utilities\") pod \"community-operators-db2q2\" (UID: \"07c53e69-8037-4261-a288-5f4505e6f7e5\") " pod="openshift-marketplace/community-operators-db2q2" Jan 07 09:51:52 crc kubenswrapper[5131]: I0107 09:51:52.807105 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07c53e69-8037-4261-a288-5f4505e6f7e5-utilities\") pod \"community-operators-db2q2\" (UID: \"07c53e69-8037-4261-a288-5f4505e6f7e5\") " pod="openshift-marketplace/community-operators-db2q2" Jan 07 09:51:52 crc kubenswrapper[5131]: I0107 09:51:52.807310 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07c53e69-8037-4261-a288-5f4505e6f7e5-catalog-content\") pod \"community-operators-db2q2\" (UID: \"07c53e69-8037-4261-a288-5f4505e6f7e5\") " pod="openshift-marketplace/community-operators-db2q2" Jan 07 09:51:52 crc kubenswrapper[5131]: E0107 09:51:52.807551 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:53.307539697 +0000 UTC m=+141.473841271 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:52 crc kubenswrapper[5131]: I0107 09:51:52.853542 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcj8g\" (UniqueName: \"kubernetes.io/projected/07c53e69-8037-4261-a288-5f4505e6f7e5-kube-api-access-bcj8g\") pod \"community-operators-db2q2\" (UID: \"07c53e69-8037-4261-a288-5f4505e6f7e5\") " pod="openshift-marketplace/community-operators-db2q2" Jan 07 09:51:52 crc kubenswrapper[5131]: I0107 09:51:52.909081 5131 generic.go:358] "Generic (PLEG): container finished" podID="78325b9f-50a6-4dac-90a8-d28091bb5104" containerID="68d80c6cc8ff9c0133f87fa152d81e436530c342ebdc369dd3100374c4940bac" exitCode=0 Jan 07 09:51:52 crc kubenswrapper[5131]: I0107 09:51:52.909170 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-cnl99" event={"ID":"78325b9f-50a6-4dac-90a8-d28091bb5104","Type":"ContainerDied","Data":"68d80c6cc8ff9c0133f87fa152d81e436530c342ebdc369dd3100374c4940bac"} Jan 07 09:51:52 crc kubenswrapper[5131]: I0107 09:51:52.909373 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 07 09:51:52 crc kubenswrapper[5131]: I0107 09:51:52.909508 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-t58j6\" (UniqueName: \"kubernetes.io/projected/b8cac87e-c013-4988-a977-5b1f038c1d34-kube-api-access-t58j6\") pod \"certified-operators-l9wkb\" (UID: \"b8cac87e-c013-4988-a977-5b1f038c1d34\") " pod="openshift-marketplace/certified-operators-l9wkb" Jan 07 09:51:52 crc kubenswrapper[5131]: I0107 09:51:52.909570 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8cac87e-c013-4988-a977-5b1f038c1d34-utilities\") pod \"certified-operators-l9wkb\" (UID: \"b8cac87e-c013-4988-a977-5b1f038c1d34\") " pod="openshift-marketplace/certified-operators-l9wkb" Jan 07 09:51:52 crc kubenswrapper[5131]: I0107 09:51:52.909631 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8cac87e-c013-4988-a977-5b1f038c1d34-catalog-content\") pod \"certified-operators-l9wkb\" (UID: \"b8cac87e-c013-4988-a977-5b1f038c1d34\") " pod="openshift-marketplace/certified-operators-l9wkb" Jan 07 09:51:52 crc kubenswrapper[5131]: E0107 09:51:52.909847 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:53.409809807 +0000 UTC m=+141.576111371 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:52 crc kubenswrapper[5131]: I0107 09:51:52.914382 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-sftp2" event={"ID":"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4","Type":"ContainerStarted","Data":"ac3775a0a3c0f7d75fe9355702065cbe264fb91ef65e8d449bae80b6bf815414"} Jan 07 09:51:52 crc kubenswrapper[5131]: I0107 09:51:52.915416 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-66458b6674-sftp2" Jan 07 09:51:52 crc kubenswrapper[5131]: I0107 09:51:52.918523 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-4nqqd" event={"ID":"71dececc-a0db-4099-9449-023def196d45","Type":"ContainerStarted","Data":"0ff0d24237074ce2531be2164b44d18c4f4ad23be534b521d02b5f17b0f930d5"} Jan 07 09:51:52 crc kubenswrapper[5131]: I0107 09:51:52.918563 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-4nqqd" event={"ID":"71dececc-a0db-4099-9449-023def196d45","Type":"ContainerStarted","Data":"b243188c663c221df7e45c13f70f68259f8eddd650134f6e3c45df7763206e6b"} Jan 07 09:51:52 crc kubenswrapper[5131]: I0107 09:51:52.918605 5131 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-sftp2 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.38:6443/healthz\": dial tcp 10.217.0.38:6443: 
connect: connection refused" start-of-body= Jan 07 09:51:52 crc kubenswrapper[5131]: I0107 09:51:52.918642 5131 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-sftp2" podUID="8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.38:6443/healthz\": dial tcp 10.217.0.38:6443: connect: connection refused" Jan 07 09:51:52 crc kubenswrapper[5131]: I0107 09:51:52.936382 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29462985-nd9tw" event={"ID":"6f5047a5-cbaa-4193-a89d-901db9b002d8","Type":"ContainerStarted","Data":"cd7eccaf5b40c91537741171e0c4006c1e54756bd174ea443f2e432571dd7f5f"} Jan 07 09:51:52 crc kubenswrapper[5131]: I0107 09:51:52.938658 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-wm88r" event={"ID":"3aec8df7-9c9b-4f00-9a8d-ab05bbecb4d4","Type":"ContainerStarted","Data":"aa79d9a9c80399cc8191e0a9896e099f9dd1b43406d667e61fe8a0148bfc09d8"} Jan 07 09:51:52 crc kubenswrapper[5131]: I0107 09:51:52.941093 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-pxdmv" event={"ID":"0d5f65eb-0ec3-427f-9153-62bbc1651bc8","Type":"ContainerStarted","Data":"0f7d606ba8a0ab4613d727e484c7d0c506e9b1f41756f7daacf5f8c399f84b85"} Jan 07 09:51:52 crc kubenswrapper[5131]: I0107 09:51:52.941229 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-pxdmv" event={"ID":"0d5f65eb-0ec3-427f-9153-62bbc1651bc8","Type":"ContainerStarted","Data":"93deda75df97da7f39ae3466f2079bbde3f42528d43197ab8af85e0dda0431f7"} Jan 07 09:51:52 crc kubenswrapper[5131]: I0107 09:51:52.942507 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-b48tj" 
event={"ID":"2fe92145-224c-4f45-a28e-78caadd67d93","Type":"ContainerStarted","Data":"7c1c63e877a052a5a1720619ec1743578ffce218c094122a56a74aaecab149a0"} Jan 07 09:51:52 crc kubenswrapper[5131]: I0107 09:51:52.945741 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-hhbw5" event={"ID":"21eae8d8-8c33-4c90-b38d-d3fccae28e7d","Type":"ContainerStarted","Data":"cfc2d994ba1076286343da6dd681a87bedf560bbebb447f4095ed1c43cf0feda"} Jan 07 09:51:52 crc kubenswrapper[5131]: I0107 09:51:52.966094 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-db2q2" Jan 07 09:51:52 crc kubenswrapper[5131]: I0107 09:51:52.966528 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-h4w78" event={"ID":"de8ef978-428b-4c64-84a1-670939953bae","Type":"ContainerStarted","Data":"a3c3a788399d37ffc9540f15eae8929565c1eeaf715abe60ac8d8a4d9ebaa03e"} Jan 07 09:51:52 crc kubenswrapper[5131]: I0107 09:51:52.966567 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-h4w78" event={"ID":"de8ef978-428b-4c64-84a1-670939953bae","Type":"ContainerStarted","Data":"05ff7730e6bfa6b9b45fdba129d9c9f5b71d4352a116f76ab8be75f2171335db"} Jan 07 09:51:52 crc kubenswrapper[5131]: I0107 09:51:52.966754 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-h4w78" Jan 07 09:51:52 crc kubenswrapper[5131]: I0107 09:51:52.972670 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-lbptl" event={"ID":"8b23000d-6c61-4c26-9d45-4433be4c9408","Type":"ContainerStarted","Data":"c459edd578a5999d3a1c57b701d40c593d1626c05db4165045c2b5fd72344b60"} Jan 07 09:51:52 crc kubenswrapper[5131]: I0107 09:51:52.979354 5131 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-rv52j"] Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.001349 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rv52j" Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.002464 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-jk2mp" event={"ID":"92c0b6a3-aea1-4854-9278-710a315edd4f","Type":"ContainerStarted","Data":"6a8f38fbe75ea301509761c23be8ff23594e45757fd845ceae5164dbd34b0731"} Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.010418 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-pxdmv" podStartSLOduration=121.010395732 podStartE2EDuration="2m1.010395732s" podCreationTimestamp="2026-01-07 09:49:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:51:53.002901147 +0000 UTC m=+141.169202701" watchObservedRunningTime="2026-01-07 09:51:53.010395732 +0000 UTC m=+141.176697296" Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.013820 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8cac87e-c013-4988-a977-5b1f038c1d34-catalog-content\") pod \"certified-operators-l9wkb\" (UID: \"b8cac87e-c013-4988-a977-5b1f038c1d34\") " pod="openshift-marketplace/certified-operators-l9wkb" Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.029128 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " 
pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.014051 5131 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-z4875 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 07 09:51:53 crc kubenswrapper[5131]: [-]has-synced failed: reason withheld Jan 07 09:51:53 crc kubenswrapper[5131]: [+]process-running ok Jan 07 09:51:53 crc kubenswrapper[5131]: healthz check failed Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.031135 5131 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-z4875" podUID="25469bc4-e2e1-41c2-9b76-7f084b1feb46" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 07 09:51:53 crc kubenswrapper[5131]: E0107 09:51:53.032359 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:53.532344253 +0000 UTC m=+141.698645807 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.016860 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8cac87e-c013-4988-a977-5b1f038c1d34-catalog-content\") pod \"certified-operators-l9wkb\" (UID: \"b8cac87e-c013-4988-a977-5b1f038c1d34\") " pod="openshift-marketplace/certified-operators-l9wkb" Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.033357 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t58j6\" (UniqueName: \"kubernetes.io/projected/b8cac87e-c013-4988-a977-5b1f038c1d34-kube-api-access-t58j6\") pod \"certified-operators-l9wkb\" (UID: \"b8cac87e-c013-4988-a977-5b1f038c1d34\") " pod="openshift-marketplace/certified-operators-l9wkb" Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.033683 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8cac87e-c013-4988-a977-5b1f038c1d34-utilities\") pod \"certified-operators-l9wkb\" (UID: \"b8cac87e-c013-4988-a977-5b1f038c1d34\") " pod="openshift-marketplace/certified-operators-l9wkb" Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.037937 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-df6lk" event={"ID":"c45456da-7004-44dd-8bf8-f3bf8f0fa6f8","Type":"ContainerStarted","Data":"e82c601b0fd7dda2755841e0436eb2c2654ede28a9058b0c0a4e97c39fd0ab6a"} Jan 07 09:51:53 
crc kubenswrapper[5131]: I0107 09:51:53.038890 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8cac87e-c013-4988-a977-5b1f038c1d34-utilities\") pod \"certified-operators-l9wkb\" (UID: \"b8cac87e-c013-4988-a977-5b1f038c1d34\") " pod="openshift-marketplace/certified-operators-l9wkb" Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.053957 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rv52j"] Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.078265 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-4nqqd" podStartSLOduration=121.078230164 podStartE2EDuration="2m1.078230164s" podCreationTimestamp="2026-01-07 09:49:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:51:53.042736557 +0000 UTC m=+141.209038121" watchObservedRunningTime="2026-01-07 09:51:53.078230164 +0000 UTC m=+141.244531728" Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.079701 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-hhbw5" podStartSLOduration=121.079693849 podStartE2EDuration="2m1.079693849s" podCreationTimestamp="2026-01-07 09:49:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:51:53.069345427 +0000 UTC m=+141.235646991" watchObservedRunningTime="2026-01-07 09:51:53.079693849 +0000 UTC m=+141.245995413" Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.080024 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-mh9sm" 
event={"ID":"683287b8-61e8-4fb7-b688-586df63f560e","Type":"ContainerStarted","Data":"0c79daedbaf6e861b3c84bf31393987cdf5f80a596cbe7f6558d90c84a5afa18"} Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.080147 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-mh9sm" event={"ID":"683287b8-61e8-4fb7-b688-586df63f560e","Type":"ContainerStarted","Data":"3b39a5b6c98832c1f41687b5a1fc03ad616fb4d451464fcc3680f298c92e14c2"} Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.081007 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t58j6\" (UniqueName: \"kubernetes.io/projected/b8cac87e-c013-4988-a977-5b1f038c1d34-kube-api-access-t58j6\") pod \"certified-operators-l9wkb\" (UID: \"b8cac87e-c013-4988-a977-5b1f038c1d34\") " pod="openshift-marketplace/certified-operators-l9wkb" Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.094058 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-hxnn7" event={"ID":"3e143601-07e2-425b-8478-f27f8045c536","Type":"ContainerStarted","Data":"e027411410d4d9d7f4e283518ddf08c97e794521af4774ecb75fda9b279ebe13"} Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.099559 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l9wkb" Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.100278 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29462985-nd9tw" podStartSLOduration=120.100261398 podStartE2EDuration="2m0.100261398s" podCreationTimestamp="2026-01-07 09:49:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:51:53.098764041 +0000 UTC m=+141.265065595" watchObservedRunningTime="2026-01-07 09:51:53.100261398 +0000 UTC m=+141.266562962" Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.115001 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-4rj8b" event={"ID":"be5efe8d-ba1a-4bc1-b232-9eeff43c3277","Type":"ContainerStarted","Data":"c53a54d2dc1f5708801ec2a256fd8a11556cd3a91deea8a559cd9040a196f4bd"} Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.136131 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.136341 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17b0639f-c0b9-4140-af54-4da733719edb-utilities\") pod \"community-operators-rv52j\" (UID: \"17b0639f-c0b9-4140-af54-4da733719edb\") " pod="openshift-marketplace/community-operators-rv52j" Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.136549 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/17b0639f-c0b9-4140-af54-4da733719edb-catalog-content\") pod \"community-operators-rv52j\" (UID: \"17b0639f-c0b9-4140-af54-4da733719edb\") " pod="openshift-marketplace/community-operators-rv52j" Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.136570 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhwk8\" (UniqueName: \"kubernetes.io/projected/17b0639f-c0b9-4140-af54-4da733719edb-kube-api-access-rhwk8\") pod \"community-operators-rv52j\" (UID: \"17b0639f-c0b9-4140-af54-4da733719edb\") " pod="openshift-marketplace/community-operators-rv52j" Jan 07 09:51:53 crc kubenswrapper[5131]: E0107 09:51:53.137629 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:53.637612627 +0000 UTC m=+141.803914191 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.138206 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-wm88r" podStartSLOduration=121.138193603 podStartE2EDuration="2m1.138193603s" podCreationTimestamp="2026-01-07 09:49:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:51:53.136444515 +0000 UTC m=+141.302746079" watchObservedRunningTime="2026-01-07 09:51:53.138193603 +0000 UTC m=+141.304495167" Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.158038 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2r6qr" event={"ID":"52bea4d2-c484-40f1-9e1a-635ce6bcfe62","Type":"ContainerStarted","Data":"7d529dec12efeeb4219d50f237568fcf4bc97f3ab4a9cff837d54e70e4d4c37a"} Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.158098 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2r6qr" event={"ID":"52bea4d2-c484-40f1-9e1a-635ce6bcfe62","Type":"ContainerStarted","Data":"f25fbc58dbcde68a959a002df63d8a00a7f28cbc2dc26e77d2ea807236abfd29"} Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.162528 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2r6qr" Jan 07 09:51:53 crc 
kubenswrapper[5131]: I0107 09:51:53.168753 5131 patch_prober.go:28] interesting pod/downloads-747b44746d-vdsqq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.168813 5131 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-vdsqq" podUID="74766801-5e31-42d8-828f-ab317c8cc228" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.178290 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-66458b6674-sftp2" podStartSLOduration=121.178275584 podStartE2EDuration="2m1.178275584s" podCreationTimestamp="2026-01-07 09:49:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:51:53.177124273 +0000 UTC m=+141.343425837" watchObservedRunningTime="2026-01-07 09:51:53.178275584 +0000 UTC m=+141.344577148" Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.178798 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-9ddfb9f55-k5x25" Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.187367 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-mrfk7" Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.187865 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-psjk8" Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.198792 5131 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-5fgb6"] Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.209123 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5fgb6" Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.227518 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5fgb6"] Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.238159 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2frr\" (UniqueName: \"kubernetes.io/projected/3c9c707f-f88b-4ba9-9722-51779966c49b-kube-api-access-j2frr\") pod \"certified-operators-5fgb6\" (UID: \"3c9c707f-f88b-4ba9-9722-51779966c49b\") " pod="openshift-marketplace/certified-operators-5fgb6" Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.238282 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.238395 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17b0639f-c0b9-4140-af54-4da733719edb-catalog-content\") pod \"community-operators-rv52j\" (UID: \"17b0639f-c0b9-4140-af54-4da733719edb\") " pod="openshift-marketplace/community-operators-rv52j" Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.238469 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rhwk8\" (UniqueName: \"kubernetes.io/projected/17b0639f-c0b9-4140-af54-4da733719edb-kube-api-access-rhwk8\") pod 
\"community-operators-rv52j\" (UID: \"17b0639f-c0b9-4140-af54-4da733719edb\") " pod="openshift-marketplace/community-operators-rv52j" Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.238756 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17b0639f-c0b9-4140-af54-4da733719edb-utilities\") pod \"community-operators-rv52j\" (UID: \"17b0639f-c0b9-4140-af54-4da733719edb\") " pod="openshift-marketplace/community-operators-rv52j" Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.238811 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c9c707f-f88b-4ba9-9722-51779966c49b-utilities\") pod \"certified-operators-5fgb6\" (UID: \"3c9c707f-f88b-4ba9-9722-51779966c49b\") " pod="openshift-marketplace/certified-operators-5fgb6" Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.238857 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17b0639f-c0b9-4140-af54-4da733719edb-catalog-content\") pod \"community-operators-rv52j\" (UID: \"17b0639f-c0b9-4140-af54-4da733719edb\") " pod="openshift-marketplace/community-operators-rv52j" Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.239283 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c9c707f-f88b-4ba9-9722-51779966c49b-catalog-content\") pod \"certified-operators-5fgb6\" (UID: \"3c9c707f-f88b-4ba9-9722-51779966c49b\") " pod="openshift-marketplace/certified-operators-5fgb6" Jan 07 09:51:53 crc kubenswrapper[5131]: E0107 09:51:53.240508 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-07 09:51:53.740495215 +0000 UTC m=+141.906796779 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.243585 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17b0639f-c0b9-4140-af54-4da733719edb-utilities\") pod \"community-operators-rv52j\" (UID: \"17b0639f-c0b9-4140-af54-4da733719edb\") " pod="openshift-marketplace/community-operators-rv52j" Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.267149 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-8596bd845d-jk2mp" podStartSLOduration=121.267132835 podStartE2EDuration="2m1.267132835s" podCreationTimestamp="2026-01-07 09:49:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:51:53.231185419 +0000 UTC m=+141.397486993" watchObservedRunningTime="2026-01-07 09:51:53.267132835 +0000 UTC m=+141.433434399" Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.267456 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-h4w78" podStartSLOduration=8.26745106 podStartE2EDuration="8.26745106s" podCreationTimestamp="2026-01-07 09:51:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:51:53.266176443 +0000 UTC 
m=+141.432478007" watchObservedRunningTime="2026-01-07 09:51:53.26745106 +0000 UTC m=+141.433752624" Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.278796 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhwk8\" (UniqueName: \"kubernetes.io/projected/17b0639f-c0b9-4140-af54-4da733719edb-kube-api-access-rhwk8\") pod \"community-operators-rv52j\" (UID: \"17b0639f-c0b9-4140-af54-4da733719edb\") " pod="openshift-marketplace/community-operators-rv52j" Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.326725 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-lbptl" podStartSLOduration=121.326711868 podStartE2EDuration="2m1.326711868s" podCreationTimestamp="2026-01-07 09:49:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:51:53.303680579 +0000 UTC m=+141.469982133" watchObservedRunningTime="2026-01-07 09:51:53.326711868 +0000 UTC m=+141.493013432" Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.327736 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2r6qr" podStartSLOduration=120.327731283 podStartE2EDuration="2m0.327731283s" podCreationTimestamp="2026-01-07 09:49:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:51:53.325561476 +0000 UTC m=+141.491863030" watchObservedRunningTime="2026-01-07 09:51:53.327731283 +0000 UTC m=+141.494032847" Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.344282 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.344858 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c9c707f-f88b-4ba9-9722-51779966c49b-utilities\") pod \"certified-operators-5fgb6\" (UID: \"3c9c707f-f88b-4ba9-9722-51779966c49b\") " pod="openshift-marketplace/certified-operators-5fgb6" Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.344924 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c9c707f-f88b-4ba9-9722-51779966c49b-catalog-content\") pod \"certified-operators-5fgb6\" (UID: \"3c9c707f-f88b-4ba9-9722-51779966c49b\") " pod="openshift-marketplace/certified-operators-5fgb6" Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.344963 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-j2frr\" (UniqueName: \"kubernetes.io/projected/3c9c707f-f88b-4ba9-9722-51779966c49b-kube-api-access-j2frr\") pod \"certified-operators-5fgb6\" (UID: \"3c9c707f-f88b-4ba9-9722-51779966c49b\") " pod="openshift-marketplace/certified-operators-5fgb6" Jan 07 09:51:53 crc kubenswrapper[5131]: E0107 09:51:53.345301 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:53.845284278 +0000 UTC m=+142.011585842 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.345621 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c9c707f-f88b-4ba9-9722-51779966c49b-utilities\") pod \"certified-operators-5fgb6\" (UID: \"3c9c707f-f88b-4ba9-9722-51779966c49b\") " pod="openshift-marketplace/certified-operators-5fgb6" Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.345846 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c9c707f-f88b-4ba9-9722-51779966c49b-catalog-content\") pod \"certified-operators-5fgb6\" (UID: \"3c9c707f-f88b-4ba9-9722-51779966c49b\") " pod="openshift-marketplace/certified-operators-5fgb6" Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.372024 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2frr\" (UniqueName: \"kubernetes.io/projected/3c9c707f-f88b-4ba9-9722-51779966c49b-kube-api-access-j2frr\") pod \"certified-operators-5fgb6\" (UID: \"3c9c707f-f88b-4ba9-9722-51779966c49b\") " pod="openshift-marketplace/certified-operators-5fgb6" Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.379285 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-799b87ffcd-mh9sm" podStartSLOduration=121.379271297 podStartE2EDuration="2m1.379271297s" podCreationTimestamp="2026-01-07 09:49:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:51:53.353348218 +0000 UTC m=+141.519649782" watchObservedRunningTime="2026-01-07 09:51:53.379271297 +0000 UTC m=+141.545572851" Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.381401 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-69b85846b6-4rj8b" podStartSLOduration=121.381393782 podStartE2EDuration="2m1.381393782s" podCreationTimestamp="2026-01-07 09:49:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:51:53.378742523 +0000 UTC m=+141.545044087" watchObservedRunningTime="2026-01-07 09:51:53.381393782 +0000 UTC m=+141.547695346" Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.411601 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rv52j" Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.432287 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-hxnn7" podStartSLOduration=121.432266645 podStartE2EDuration="2m1.432266645s" podCreationTimestamp="2026-01-07 09:49:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:51:53.424005296 +0000 UTC m=+141.590306860" watchObservedRunningTime="2026-01-07 09:51:53.432266645 +0000 UTC m=+141.598568209" Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.446354 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: 
\"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:53 crc kubenswrapper[5131]: E0107 09:51:53.446851 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:53.946821975 +0000 UTC m=+142.113123539 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.530231 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l9wkb"] Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.550499 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 07 09:51:53 crc kubenswrapper[5131]: E0107 09:51:53.550761 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:54.05074608 +0000 UTC m=+142.217047644 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.551239 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-jk2mp" Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.551519 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-8596bd845d-jk2mp" Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.565080 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5fgb6" Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.634319 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-db2q2"] Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.651807 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:53 crc kubenswrapper[5131]: E0107 09:51:53.652095 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-07 09:51:54.152083408 +0000 UTC m=+142.318384972 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.756383 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 07 09:51:53 crc kubenswrapper[5131]: E0107 09:51:53.756651 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:54.2566213 +0000 UTC m=+142.422922854 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.757133 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:53 crc kubenswrapper[5131]: E0107 09:51:53.757388 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:54.257380424 +0000 UTC m=+142.423681988 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.857712 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 07 09:51:53 crc kubenswrapper[5131]: E0107 09:51:53.858128 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:54.358108525 +0000 UTC m=+142.524410089 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 07 09:51:53 crc kubenswrapper[5131]: I0107 09:51:53.960811 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4"
Jan 07 09:51:53 crc kubenswrapper[5131]: E0107 09:51:53.961196 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:54.461180351 +0000 UTC m=+142.627481915 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 07 09:51:54 crc kubenswrapper[5131]: I0107 09:51:54.010066 5131 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-z4875 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 07 09:51:54 crc kubenswrapper[5131]: [-]has-synced failed: reason withheld
Jan 07 09:51:54 crc kubenswrapper[5131]: [+]process-running ok
Jan 07 09:51:54 crc kubenswrapper[5131]: healthz check failed
Jan 07 09:51:54 crc kubenswrapper[5131]: I0107 09:51:54.010302 5131 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-z4875" podUID="25469bc4-e2e1-41c2-9b76-7f084b1feb46" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 07 09:51:54 crc kubenswrapper[5131]: I0107 09:51:54.022178 5131 ???:1] "http: TLS handshake error from 192.168.126.11:51018: no serving certificate available for the kubelet"
Jan 07 09:51:54 crc kubenswrapper[5131]: I0107 09:51:54.063123 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 07 09:51:54 crc kubenswrapper[5131]: E0107 09:51:54.063220 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:54.56320399 +0000 UTC m=+142.729505554 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 07 09:51:54 crc kubenswrapper[5131]: I0107 09:51:54.063473 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4"
Jan 07 09:51:54 crc kubenswrapper[5131]: E0107 09:51:54.063739 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:54.563732304 +0000 UTC m=+142.730033868 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 07 09:51:54 crc kubenswrapper[5131]: I0107 09:51:54.103291 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-8596bd845d-jk2mp"
Jan 07 09:51:54 crc kubenswrapper[5131]: I0107 09:51:54.106947 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5fgb6"]
Jan 07 09:51:54 crc kubenswrapper[5131]: I0107 09:51:54.166907 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 07 09:51:54 crc kubenswrapper[5131]: I0107 09:51:54.167418 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 07 09:51:54 crc kubenswrapper[5131]: I0107 09:51:54.167548 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 07 09:51:54 crc kubenswrapper[5131]: E0107 09:51:54.172811 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:54.672790828 +0000 UTC m=+142.839092392 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 07 09:51:54 crc kubenswrapper[5131]: I0107 09:51:54.173753 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 07 09:51:54 crc kubenswrapper[5131]: I0107 09:51:54.173978 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 07 09:51:54 crc kubenswrapper[5131]: I0107 09:51:54.174087 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4"
Jan 07 09:51:54 crc kubenswrapper[5131]: E0107 09:51:54.174458 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:54.674445712 +0000 UTC m=+142.840747276 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 07 09:51:54 crc kubenswrapper[5131]: I0107 09:51:54.175504 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 07 09:51:54 crc kubenswrapper[5131]: I0107 09:51:54.179261 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 07 09:51:54 crc kubenswrapper[5131]: I0107 09:51:54.197403 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 07 09:51:54 crc kubenswrapper[5131]: I0107 09:51:54.199745 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 07 09:51:54 crc kubenswrapper[5131]: I0107 09:51:54.201002 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rv52j"]
Jan 07 09:51:54 crc kubenswrapper[5131]: I0107 09:51:54.208199 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5fgb6" event={"ID":"3c9c707f-f88b-4ba9-9722-51779966c49b","Type":"ContainerStarted","Data":"406ab0cd1edbec233068e5e73ea50c9ff28765a5a0e3f5b694c2d394df3a2a8d"}
Jan 07 09:51:54 crc kubenswrapper[5131]: I0107 09:51:54.213029 5131 generic.go:358] "Generic (PLEG): container finished" podID="b8cac87e-c013-4988-a977-5b1f038c1d34" containerID="3cc8e8df055fc6b9073a8c04748d0999b2ec523d927ac26e4118eaa9c9b3ab2f" exitCode=0
Jan 07 09:51:54 crc kubenswrapper[5131]: I0107 09:51:54.213114 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9wkb" event={"ID":"b8cac87e-c013-4988-a977-5b1f038c1d34","Type":"ContainerDied","Data":"3cc8e8df055fc6b9073a8c04748d0999b2ec523d927ac26e4118eaa9c9b3ab2f"}
Jan 07 09:51:54 crc kubenswrapper[5131]: I0107 09:51:54.213137 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9wkb" event={"ID":"b8cac87e-c013-4988-a977-5b1f038c1d34","Type":"ContainerStarted","Data":"1aeed343e3056936b6813c508d456224884975f42dee11f7088d113bf0ee41f5"}
Jan 07 09:51:54 crc kubenswrapper[5131]: W0107 09:51:54.218796 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17b0639f_c0b9_4140_af54_4da733719edb.slice/crio-9aefa76d24d07e32678ae078b945929126e192e4706aa2efa662ce57574dfa90 WatchSource:0}: Error finding container 9aefa76d24d07e32678ae078b945929126e192e4706aa2efa662ce57574dfa90: Status 404 returned error can't find the container with id 9aefa76d24d07e32678ae078b945929126e192e4706aa2efa662ce57574dfa90
Jan 07 09:51:54 crc kubenswrapper[5131]: I0107 09:51:54.220104 5131 generic.go:358] "Generic (PLEG): container finished" podID="07c53e69-8037-4261-a288-5f4505e6f7e5" containerID="697b3d6414dd3ec4cce467df2b3ccd3a0f454800e497b97f5e1c4df4bdf4b8b4" exitCode=0
Jan 07 09:51:54 crc kubenswrapper[5131]: I0107 09:51:54.220173 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-db2q2" event={"ID":"07c53e69-8037-4261-a288-5f4505e6f7e5","Type":"ContainerDied","Data":"697b3d6414dd3ec4cce467df2b3ccd3a0f454800e497b97f5e1c4df4bdf4b8b4"}
Jan 07 09:51:54 crc kubenswrapper[5131]: I0107 09:51:54.220197 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-db2q2" event={"ID":"07c53e69-8037-4261-a288-5f4505e6f7e5","Type":"ContainerStarted","Data":"60e976610e387b1508ad0a187e081731c921de8bf370d4a976a0583047b8e088"}
Jan 07 09:51:54 crc kubenswrapper[5131]: I0107 09:51:54.250338 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-cnl99" event={"ID":"78325b9f-50a6-4dac-90a8-d28091bb5104","Type":"ContainerStarted","Data":"8adbb79200332ae4b14c2aeb5ebd0ee9f867479c91008692de11909d517bc0ca"}
Jan 07 09:51:54 crc kubenswrapper[5131]: I0107 09:51:54.253700 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-config-operator/openshift-config-operator-5777786469-cnl99"
Jan 07 09:51:54 crc kubenswrapper[5131]: I0107 09:51:54.253951 5131 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-grvm4" podUID="c61a2db1-fb94-4541-bc6a-57a2f0075072" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://020bec0b8df66d061898080a4918b13e7b30e9a5fbe18d9973f6f4e9e6964d54" gracePeriod=30
Jan 07 09:51:54 crc kubenswrapper[5131]: I0107 09:51:54.255518 5131 patch_prober.go:28] interesting pod/downloads-747b44746d-vdsqq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Jan 07 09:51:54 crc kubenswrapper[5131]: I0107 09:51:54.255563 5131 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-vdsqq" podUID="74766801-5e31-42d8-828f-ab317c8cc228" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Jan 07 09:51:54 crc kubenswrapper[5131]: I0107 09:51:54.256666 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-jk2mp"
Jan 07 09:51:54 crc kubenswrapper[5131]: I0107 09:51:54.266003 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66458b6674-sftp2"
Jan 07 09:51:54 crc kubenswrapper[5131]: I0107 09:51:54.275888 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 07 09:51:54 crc kubenswrapper[5131]: E0107 09:51:54.276037 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:54.7760101 +0000 UTC m=+142.942311664 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 07 09:51:54 crc kubenswrapper[5131]: I0107 09:51:54.280774 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4"
Jan 07 09:51:54 crc kubenswrapper[5131]: E0107 09:51:54.281092 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:54.781076947 +0000 UTC m=+142.947378511 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 07 09:51:54 crc kubenswrapper[5131]: I0107 09:51:54.281479 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e-metrics-certs\") pod \"network-metrics-daemon-5cj94\" (UID: \"ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e\") " pod="openshift-multus/network-metrics-daemon-5cj94"
Jan 07 09:51:54 crc kubenswrapper[5131]: I0107 09:51:54.295747 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e-metrics-certs\") pod \"network-metrics-daemon-5cj94\" (UID: \"ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e\") " pod="openshift-multus/network-metrics-daemon-5cj94"
Jan 07 09:51:54 crc kubenswrapper[5131]: I0107 09:51:54.314825 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-5777786469-cnl99" podStartSLOduration=122.314808274 podStartE2EDuration="2m2.314808274s" podCreationTimestamp="2026-01-07 09:49:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:51:54.311408492 +0000 UTC m=+142.477710056" watchObservedRunningTime="2026-01-07 09:51:54.314808274 +0000 UTC m=+142.481109838"
Jan 07 09:51:54 crc kubenswrapper[5131]: I0107 09:51:54.382413 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 07 09:51:54 crc kubenswrapper[5131]: E0107 09:51:54.382743 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:54.882725709 +0000 UTC m=+143.049027273 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 07 09:51:54 crc kubenswrapper[5131]: I0107 09:51:54.408768 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 07 09:51:54 crc kubenswrapper[5131]: I0107 09:51:54.427988 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 07 09:51:54 crc kubenswrapper[5131]: I0107 09:51:54.443016 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 07 09:51:54 crc kubenswrapper[5131]: I0107 09:51:54.455661 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cj94"
Jan 07 09:51:54 crc kubenswrapper[5131]: I0107 09:51:54.483510 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4"
Jan 07 09:51:54 crc kubenswrapper[5131]: E0107 09:51:54.483827 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:54.983813716 +0000 UTC m=+143.150115280 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 07 09:51:54 crc kubenswrapper[5131]: I0107 09:51:54.584887 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 07 09:51:54 crc kubenswrapper[5131]: E0107 09:51:54.585310 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:55.085295111 +0000 UTC m=+143.251596675 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 07 09:51:54 crc kubenswrapper[5131]: I0107 09:51:54.686902 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4"
Jan 07 09:51:54 crc kubenswrapper[5131]: E0107 09:51:54.687206 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:55.187194475 +0000 UTC m=+143.353496039 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 07 09:51:54 crc kubenswrapper[5131]: I0107 09:51:54.780908 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-t8cf2"]
Jan 07 09:51:54 crc kubenswrapper[5131]: I0107 09:51:54.788306 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 07 09:51:54 crc kubenswrapper[5131]: E0107 09:51:54.788677 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:55.288661849 +0000 UTC m=+143.454963403 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 07 09:51:54 crc kubenswrapper[5131]: I0107 09:51:54.789326 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t8cf2"
Jan 07 09:51:54 crc kubenswrapper[5131]: I0107 09:51:54.793918 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\""
Jan 07 09:51:54 crc kubenswrapper[5131]: I0107 09:51:54.808948 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t8cf2"]
Jan 07 09:51:54 crc kubenswrapper[5131]: I0107 09:51:54.892747 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f04172ba-2c1f-4d8f-b742-7d182136ca81-utilities\") pod \"redhat-marketplace-t8cf2\" (UID: \"f04172ba-2c1f-4d8f-b742-7d182136ca81\") " pod="openshift-marketplace/redhat-marketplace-t8cf2"
Jan 07 09:51:54 crc kubenswrapper[5131]: I0107 09:51:54.893055 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4"
Jan 07 09:51:54 crc kubenswrapper[5131]: I0107 09:51:54.893078 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f04172ba-2c1f-4d8f-b742-7d182136ca81-catalog-content\") pod \"redhat-marketplace-t8cf2\" (UID: \"f04172ba-2c1f-4d8f-b742-7d182136ca81\") " pod="openshift-marketplace/redhat-marketplace-t8cf2"
Jan 07 09:51:54 crc kubenswrapper[5131]: I0107 09:51:54.893108 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvnnv\" (UniqueName: \"kubernetes.io/projected/f04172ba-2c1f-4d8f-b742-7d182136ca81-kube-api-access-fvnnv\") pod \"redhat-marketplace-t8cf2\" (UID: \"f04172ba-2c1f-4d8f-b742-7d182136ca81\") " pod="openshift-marketplace/redhat-marketplace-t8cf2"
Jan 07 09:51:54 crc kubenswrapper[5131]: E0107 09:51:54.893384 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:55.393373558 +0000 UTC m=+143.559675122 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 07 09:51:54 crc kubenswrapper[5131]: W0107 09:51:54.955262 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf863fff9_286a_45fa_b8f0_8a86994b8440.slice/crio-07d896ec4fab37fc7385ff6bcd6c7f06fe8df6a122b47e27b97272208921816a WatchSource:0}: Error finding container 07d896ec4fab37fc7385ff6bcd6c7f06fe8df6a122b47e27b97272208921816a: Status 404 returned error can't find the container with id 07d896ec4fab37fc7385ff6bcd6c7f06fe8df6a122b47e27b97272208921816a
Jan 07 09:51:54 crc kubenswrapper[5131]: I0107 09:51:54.996669 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 07 09:51:54 crc kubenswrapper[5131]: I0107 09:51:54.996921 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f04172ba-2c1f-4d8f-b742-7d182136ca81-catalog-content\") pod \"redhat-marketplace-t8cf2\" (UID: \"f04172ba-2c1f-4d8f-b742-7d182136ca81\") " pod="openshift-marketplace/redhat-marketplace-t8cf2"
Jan 07 09:51:54 crc kubenswrapper[5131]: I0107 09:51:54.996947 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fvnnv\" (UniqueName: \"kubernetes.io/projected/f04172ba-2c1f-4d8f-b742-7d182136ca81-kube-api-access-fvnnv\") pod \"redhat-marketplace-t8cf2\" (UID: \"f04172ba-2c1f-4d8f-b742-7d182136ca81\") " pod="openshift-marketplace/redhat-marketplace-t8cf2"
Jan 07 09:51:54 crc kubenswrapper[5131]: I0107 09:51:54.996991 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f04172ba-2c1f-4d8f-b742-7d182136ca81-utilities\") pod \"redhat-marketplace-t8cf2\" (UID: \"f04172ba-2c1f-4d8f-b742-7d182136ca81\") " pod="openshift-marketplace/redhat-marketplace-t8cf2"
Jan 07 09:51:54 crc kubenswrapper[5131]: I0107 09:51:54.997420 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f04172ba-2c1f-4d8f-b742-7d182136ca81-utilities\") pod \"redhat-marketplace-t8cf2\" (UID: \"f04172ba-2c1f-4d8f-b742-7d182136ca81\") " pod="openshift-marketplace/redhat-marketplace-t8cf2"
Jan 07 09:51:54 crc kubenswrapper[5131]: E0107 09:51:54.997489 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:55.49747371 +0000 UTC m=+143.663775274 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 07 09:51:54 crc kubenswrapper[5131]: I0107 09:51:54.997694 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f04172ba-2c1f-4d8f-b742-7d182136ca81-catalog-content\") pod \"redhat-marketplace-t8cf2\" (UID: \"f04172ba-2c1f-4d8f-b742-7d182136ca81\") " pod="openshift-marketplace/redhat-marketplace-t8cf2"
Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.015694 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-68cf44c8b8-z4875"
Jan 07 09:51:55 crc kubenswrapper[5131]: W0107 09:51:55.045037 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a9ae5f6_97bd_46ac_bafa_ca1b4452a141.slice/crio-67cfad44896d07932f4433a7dc4f9dd66c7c7510cdb86e91e48a7d90d6310982 WatchSource:0}: Error finding container 67cfad44896d07932f4433a7dc4f9dd66c7c7510cdb86e91e48a7d90d6310982: Status 404 returned error can't find the container with id 67cfad44896d07932f4433a7dc4f9dd66c7c7510cdb86e91e48a7d90d6310982
Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.046208 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvnnv\" (UniqueName: \"kubernetes.io/projected/f04172ba-2c1f-4d8f-b742-7d182136ca81-kube-api-access-fvnnv\") pod \"redhat-marketplace-t8cf2\" (UID: \"f04172ba-2c1f-4d8f-b742-7d182136ca81\") " pod="openshift-marketplace/redhat-marketplace-t8cf2"
Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.053674 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-5cj94"]
Jan 07 09:51:55 crc kubenswrapper[5131]: W0107 09:51:55.078728 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca8a9254_0eaf_44d8_9d02_4d1ce8b27e7e.slice/crio-1b4bd4640ad4a83168c2219791a93487c2f406abc9d4ad9b0dae0f8e8ea4c81a WatchSource:0}: Error finding container 1b4bd4640ad4a83168c2219791a93487c2f406abc9d4ad9b0dae0f8e8ea4c81a: Status 404 returned error can't find the container with id 1b4bd4640ad4a83168c2219791a93487c2f406abc9d4ad9b0dae0f8e8ea4c81a
Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.100739 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4"
Jan 07 09:51:55 crc kubenswrapper[5131]: E0107 09:51:55.101096 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:55.60107928 +0000 UTC m=+143.767380844 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.140655 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t8cf2"
Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.196677 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rq7z2"]
Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.205270 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 07 09:51:55 crc kubenswrapper[5131]: E0107 09:51:55.205626 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:55.705609872 +0000 UTC m=+143.871911436 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.206288 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rq7z2"] Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.206455 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rq7z2" Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.275380 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"9731b064a8b6940eda2342a9649df6b2d3c08b06eb1f151f5bd601066fea2aac"} Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.279958 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"67cfad44896d07932f4433a7dc4f9dd66c7c7510cdb86e91e48a7d90d6310982"} Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.300343 5131 generic.go:358] "Generic (PLEG): container finished" podID="3c9c707f-f88b-4ba9-9722-51779966c49b" containerID="8d8743f8e106258443b19b6b54860ddc9161e16ccad91cd2b4093fca46185564" exitCode=0 Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.300444 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5fgb6" 
event={"ID":"3c9c707f-f88b-4ba9-9722-51779966c49b","Type":"ContainerDied","Data":"8d8743f8e106258443b19b6b54860ddc9161e16ccad91cd2b4093fca46185564"} Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.306237 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b21d118-e577-4bf0-a27c-f8fe3f05adc6-utilities\") pod \"redhat-marketplace-rq7z2\" (UID: \"5b21d118-e577-4bf0-a27c-f8fe3f05adc6\") " pod="openshift-marketplace/redhat-marketplace-rq7z2" Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.306456 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b21d118-e577-4bf0-a27c-f8fe3f05adc6-catalog-content\") pod \"redhat-marketplace-rq7z2\" (UID: \"5b21d118-e577-4bf0-a27c-f8fe3f05adc6\") " pod="openshift-marketplace/redhat-marketplace-rq7z2" Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.306506 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp8lr\" (UniqueName: \"kubernetes.io/projected/5b21d118-e577-4bf0-a27c-f8fe3f05adc6-kube-api-access-vp8lr\") pod \"redhat-marketplace-rq7z2\" (UID: \"5b21d118-e577-4bf0-a27c-f8fe3f05adc6\") " pod="openshift-marketplace/redhat-marketplace-rq7z2" Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.306591 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:55 crc kubenswrapper[5131]: E0107 09:51:55.306879 5131 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:55.806867807 +0000 UTC m=+143.973169361 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.310451 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5cj94" event={"ID":"ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e","Type":"ContainerStarted","Data":"1b4bd4640ad4a83168c2219791a93487c2f406abc9d4ad9b0dae0f8e8ea4c81a"} Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.324790 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"896b673a7d5aef45b16d6e8546d8ecaa57ec238e4699569f803d680359b4674a"} Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.324851 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"07d896ec4fab37fc7385ff6bcd6c7f06fe8df6a122b47e27b97272208921816a"} Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.335677 5131 generic.go:358] "Generic (PLEG): container finished" podID="17b0639f-c0b9-4140-af54-4da733719edb" containerID="a06c9983858c1d3af015868ff0c1db73693d4788b323d4d88f5660cb86b80176" exitCode=0 Jan 07 09:51:55 crc 
kubenswrapper[5131]: I0107 09:51:55.339552 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rv52j" event={"ID":"17b0639f-c0b9-4140-af54-4da733719edb","Type":"ContainerDied","Data":"a06c9983858c1d3af015868ff0c1db73693d4788b323d4d88f5660cb86b80176"} Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.340481 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-68cf44c8b8-z4875" Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.340528 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rv52j" event={"ID":"17b0639f-c0b9-4140-af54-4da733719edb","Type":"ContainerStarted","Data":"9aefa76d24d07e32678ae078b945929126e192e4706aa2efa662ce57574dfa90"} Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.342626 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-68cf44c8b8-z4875" Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.407067 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.407233 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b21d118-e577-4bf0-a27c-f8fe3f05adc6-utilities\") pod \"redhat-marketplace-rq7z2\" (UID: \"5b21d118-e577-4bf0-a27c-f8fe3f05adc6\") " pod="openshift-marketplace/redhat-marketplace-rq7z2" Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.407293 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/5b21d118-e577-4bf0-a27c-f8fe3f05adc6-catalog-content\") pod \"redhat-marketplace-rq7z2\" (UID: \"5b21d118-e577-4bf0-a27c-f8fe3f05adc6\") " pod="openshift-marketplace/redhat-marketplace-rq7z2" Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.407398 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vp8lr\" (UniqueName: \"kubernetes.io/projected/5b21d118-e577-4bf0-a27c-f8fe3f05adc6-kube-api-access-vp8lr\") pod \"redhat-marketplace-rq7z2\" (UID: \"5b21d118-e577-4bf0-a27c-f8fe3f05adc6\") " pod="openshift-marketplace/redhat-marketplace-rq7z2" Jan 07 09:51:55 crc kubenswrapper[5131]: E0107 09:51:55.408460 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:55.908445036 +0000 UTC m=+144.074746600 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.412118 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b21d118-e577-4bf0-a27c-f8fe3f05adc6-utilities\") pod \"redhat-marketplace-rq7z2\" (UID: \"5b21d118-e577-4bf0-a27c-f8fe3f05adc6\") " pod="openshift-marketplace/redhat-marketplace-rq7z2" Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.413342 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b21d118-e577-4bf0-a27c-f8fe3f05adc6-catalog-content\") pod \"redhat-marketplace-rq7z2\" (UID: \"5b21d118-e577-4bf0-a27c-f8fe3f05adc6\") " pod="openshift-marketplace/redhat-marketplace-rq7z2" Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.452150 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp8lr\" (UniqueName: \"kubernetes.io/projected/5b21d118-e577-4bf0-a27c-f8fe3f05adc6-kube-api-access-vp8lr\") pod \"redhat-marketplace-rq7z2\" (UID: \"5b21d118-e577-4bf0-a27c-f8fe3f05adc6\") " pod="openshift-marketplace/redhat-marketplace-rq7z2" Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.508801 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " 
pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:55 crc kubenswrapper[5131]: E0107 09:51:55.509142 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:56.009130345 +0000 UTC m=+144.175431909 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.516245 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t8cf2"] Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.553084 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rq7z2" Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.609736 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 07 09:51:55 crc kubenswrapper[5131]: E0107 09:51:55.609978 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-07 09:51:56.109961721 +0000 UTC m=+144.276263285 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.629884 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.637175 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.641713 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler\"/\"installer-sa-dockercfg-qpkss\"" Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.641960 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler\"/\"kube-root-ca.crt\"" Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.647930 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.711079 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6f16686c-cacb-409a-a551-b29b54a60782-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"6f16686c-cacb-409a-a551-b29b54a60782\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.711160 5131 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.711186 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6f16686c-cacb-409a-a551-b29b54a60782-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"6f16686c-cacb-409a-a551-b29b54a60782\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 07 09:51:55 crc kubenswrapper[5131]: E0107 09:51:55.711462 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:56.211438326 +0000 UTC m=+144.377739890 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.774598 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cbvmr"] Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.785098 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cbvmr" Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.786230 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cbvmr"] Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.789576 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.813650 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.813816 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qn5wz\" (UniqueName: \"kubernetes.io/projected/4a44502e-cd8c-4525-95f6-33c1eab86d42-kube-api-access-qn5wz\") pod \"redhat-operators-cbvmr\" (UID: \"4a44502e-cd8c-4525-95f6-33c1eab86d42\") " pod="openshift-marketplace/redhat-operators-cbvmr" Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.813915 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6f16686c-cacb-409a-a551-b29b54a60782-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"6f16686c-cacb-409a-a551-b29b54a60782\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 07 09:51:55 crc kubenswrapper[5131]: E0107 09:51:55.814221 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-07 09:51:56.314203979 +0000 UTC m=+144.480505543 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.814774 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a44502e-cd8c-4525-95f6-33c1eab86d42-utilities\") pod \"redhat-operators-cbvmr\" (UID: \"4a44502e-cd8c-4525-95f6-33c1eab86d42\") " pod="openshift-marketplace/redhat-operators-cbvmr" Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.814818 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a44502e-cd8c-4525-95f6-33c1eab86d42-catalog-content\") pod \"redhat-operators-cbvmr\" (UID: \"4a44502e-cd8c-4525-95f6-33c1eab86d42\") " pod="openshift-marketplace/redhat-operators-cbvmr" Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.815017 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6f16686c-cacb-409a-a551-b29b54a60782-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"6f16686c-cacb-409a-a551-b29b54a60782\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.815215 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6f16686c-cacb-409a-a551-b29b54a60782-kubelet-dir\") 
pod \"revision-pruner-6-crc\" (UID: \"6f16686c-cacb-409a-a551-b29b54a60782\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.838365 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6f16686c-cacb-409a-a551-b29b54a60782-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"6f16686c-cacb-409a-a551-b29b54a60782\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.916457 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.916536 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a44502e-cd8c-4525-95f6-33c1eab86d42-utilities\") pod \"redhat-operators-cbvmr\" (UID: \"4a44502e-cd8c-4525-95f6-33c1eab86d42\") " pod="openshift-marketplace/redhat-operators-cbvmr" Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.916572 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a44502e-cd8c-4525-95f6-33c1eab86d42-catalog-content\") pod \"redhat-operators-cbvmr\" (UID: \"4a44502e-cd8c-4525-95f6-33c1eab86d42\") " pod="openshift-marketplace/redhat-operators-cbvmr" Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.916625 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qn5wz\" (UniqueName: 
\"kubernetes.io/projected/4a44502e-cd8c-4525-95f6-33c1eab86d42-kube-api-access-qn5wz\") pod \"redhat-operators-cbvmr\" (UID: \"4a44502e-cd8c-4525-95f6-33c1eab86d42\") " pod="openshift-marketplace/redhat-operators-cbvmr" Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.917659 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a44502e-cd8c-4525-95f6-33c1eab86d42-utilities\") pod \"redhat-operators-cbvmr\" (UID: \"4a44502e-cd8c-4525-95f6-33c1eab86d42\") " pod="openshift-marketplace/redhat-operators-cbvmr" Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.917689 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a44502e-cd8c-4525-95f6-33c1eab86d42-catalog-content\") pod \"redhat-operators-cbvmr\" (UID: \"4a44502e-cd8c-4525-95f6-33c1eab86d42\") " pod="openshift-marketplace/redhat-operators-cbvmr" Jan 07 09:51:55 crc kubenswrapper[5131]: E0107 09:51:55.926947 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:56.426928076 +0000 UTC m=+144.593229640 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.953805 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qn5wz\" (UniqueName: \"kubernetes.io/projected/4a44502e-cd8c-4525-95f6-33c1eab86d42-kube-api-access-qn5wz\") pod \"redhat-operators-cbvmr\" (UID: \"4a44502e-cd8c-4525-95f6-33c1eab86d42\") " pod="openshift-marketplace/redhat-operators-cbvmr" Jan 07 09:51:55 crc kubenswrapper[5131]: I0107 09:51:55.984561 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6gww5"] Jan 07 09:51:56 crc kubenswrapper[5131]: I0107 09:51:56.009057 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6gww5" Jan 07 09:51:56 crc kubenswrapper[5131]: I0107 09:51:56.010643 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6gww5"] Jan 07 09:51:56 crc kubenswrapper[5131]: I0107 09:51:56.010969 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc"
Jan 07 09:51:56 crc kubenswrapper[5131]: I0107 09:51:56.017389 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rq7z2"]
Jan 07 09:51:56 crc kubenswrapper[5131]: I0107 09:51:56.017427 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 07 09:51:56 crc kubenswrapper[5131]: I0107 09:51:56.017664 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef7d2b17-658d-4af6-b15d-5bdadcc4f021-utilities\") pod \"redhat-operators-6gww5\" (UID: \"ef7d2b17-658d-4af6-b15d-5bdadcc4f021\") " pod="openshift-marketplace/redhat-operators-6gww5"
Jan 07 09:51:56 crc kubenswrapper[5131]: I0107 09:51:56.017740 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x848k\" (UniqueName: \"kubernetes.io/projected/ef7d2b17-658d-4af6-b15d-5bdadcc4f021-kube-api-access-x848k\") pod \"redhat-operators-6gww5\" (UID: \"ef7d2b17-658d-4af6-b15d-5bdadcc4f021\") " pod="openshift-marketplace/redhat-operators-6gww5"
Jan 07 09:51:56 crc kubenswrapper[5131]: I0107 09:51:56.017778 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef7d2b17-658d-4af6-b15d-5bdadcc4f021-catalog-content\") pod \"redhat-operators-6gww5\" (UID: \"ef7d2b17-658d-4af6-b15d-5bdadcc4f021\") " pod="openshift-marketplace/redhat-operators-6gww5"
Jan 07 09:51:56 crc kubenswrapper[5131]: E0107 09:51:56.017898 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:56.517881631 +0000 UTC m=+144.684183195 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 07 09:51:56 crc kubenswrapper[5131]: W0107 09:51:56.054247 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b21d118_e577_4bf0_a27c_f8fe3f05adc6.slice/crio-41fff9a89a231ed4eed25db1f13ec50779c0c1d0fb98945ab4d7601b67b492b0 WatchSource:0}: Error finding container 41fff9a89a231ed4eed25db1f13ec50779c0c1d0fb98945ab4d7601b67b492b0: Status 404 returned error can't find the container with id 41fff9a89a231ed4eed25db1f13ec50779c0c1d0fb98945ab4d7601b67b492b0
Jan 07 09:51:56 crc kubenswrapper[5131]: I0107 09:51:56.119018 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x848k\" (UniqueName: \"kubernetes.io/projected/ef7d2b17-658d-4af6-b15d-5bdadcc4f021-kube-api-access-x848k\") pod \"redhat-operators-6gww5\" (UID: \"ef7d2b17-658d-4af6-b15d-5bdadcc4f021\") " pod="openshift-marketplace/redhat-operators-6gww5"
Jan 07 09:51:56 crc kubenswrapper[5131]: I0107 09:51:56.119082 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4"
Jan 07 09:51:56 crc kubenswrapper[5131]: I0107 09:51:56.119113 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef7d2b17-658d-4af6-b15d-5bdadcc4f021-catalog-content\") pod \"redhat-operators-6gww5\" (UID: \"ef7d2b17-658d-4af6-b15d-5bdadcc4f021\") " pod="openshift-marketplace/redhat-operators-6gww5"
Jan 07 09:51:56 crc kubenswrapper[5131]: E0107 09:51:56.119482 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:56.61946493 +0000 UTC m=+144.785766494 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 07 09:51:56 crc kubenswrapper[5131]: I0107 09:51:56.119482 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef7d2b17-658d-4af6-b15d-5bdadcc4f021-utilities\") pod \"redhat-operators-6gww5\" (UID: \"ef7d2b17-658d-4af6-b15d-5bdadcc4f021\") " pod="openshift-marketplace/redhat-operators-6gww5"
Jan 07 09:51:56 crc kubenswrapper[5131]: I0107 09:51:56.119583 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef7d2b17-658d-4af6-b15d-5bdadcc4f021-catalog-content\") pod \"redhat-operators-6gww5\" (UID: \"ef7d2b17-658d-4af6-b15d-5bdadcc4f021\") " pod="openshift-marketplace/redhat-operators-6gww5"
Jan 07 09:51:56 crc kubenswrapper[5131]: I0107 09:51:56.119742 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef7d2b17-658d-4af6-b15d-5bdadcc4f021-utilities\") pod \"redhat-operators-6gww5\" (UID: \"ef7d2b17-658d-4af6-b15d-5bdadcc4f021\") " pod="openshift-marketplace/redhat-operators-6gww5"
Jan 07 09:51:56 crc kubenswrapper[5131]: I0107 09:51:56.137959 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-x848k\" (UniqueName: \"kubernetes.io/projected/ef7d2b17-658d-4af6-b15d-5bdadcc4f021-kube-api-access-x848k\") pod \"redhat-operators-6gww5\" (UID: \"ef7d2b17-658d-4af6-b15d-5bdadcc4f021\") " pod="openshift-marketplace/redhat-operators-6gww5"
Jan 07 09:51:56 crc kubenswrapper[5131]: I0107 09:51:56.142062 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cbvmr"
Jan 07 09:51:56 crc kubenswrapper[5131]: I0107 09:51:56.221579 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 07 09:51:56 crc kubenswrapper[5131]: E0107 09:51:56.221804 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:56.721788803 +0000 UTC m=+144.888090367 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 07 09:51:56 crc kubenswrapper[5131]: I0107 09:51:56.305266 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"]
Jan 07 09:51:56 crc kubenswrapper[5131]: I0107 09:51:56.327983 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4"
Jan 07 09:51:56 crc kubenswrapper[5131]: E0107 09:51:56.328439 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:56.828401747 +0000 UTC m=+144.994703311 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 07 09:51:56 crc kubenswrapper[5131]: I0107 09:51:56.329798 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6gww5"
Jan 07 09:51:56 crc kubenswrapper[5131]: I0107 09:51:56.346778 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5cj94" event={"ID":"ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e","Type":"ContainerStarted","Data":"ced4b71939f0751f21166df460f7fa9ddc9747d00ad07f25557db42761ab662e"}
Jan 07 09:51:56 crc kubenswrapper[5131]: I0107 09:51:56.346824 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5cj94" event={"ID":"ca8a9254-0eaf-44d8-9d02-4d1ce8b27e7e","Type":"ContainerStarted","Data":"405e83e3bfa2ac6d73a78536ce7aa08fd3bb05e7ecd9502f5bcbefe120be0147"}
Jan 07 09:51:56 crc kubenswrapper[5131]: I0107 09:51:56.351130 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7cl88" event={"ID":"c892059c-f661-4684-9a1e-19e0b0070d24","Type":"ContainerStarted","Data":"3c57d29f05fdbc21df14f47e8bb43950d8cf34cd5f0c6fd315e706f799604663"}
Jan 07 09:51:56 crc kubenswrapper[5131]: I0107 09:51:56.364951 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-5cj94" podStartSLOduration=124.364924899 podStartE2EDuration="2m4.364924899s" podCreationTimestamp="2026-01-07 09:49:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:51:56.363913524 +0000 UTC m=+144.530215088" watchObservedRunningTime="2026-01-07 09:51:56.364924899 +0000 UTC m=+144.531226483"
Jan 07 09:51:56 crc kubenswrapper[5131]: I0107 09:51:56.365568 5131 generic.go:358] "Generic (PLEG): container finished" podID="5b21d118-e577-4bf0-a27c-f8fe3f05adc6" containerID="59f2ad1057e3811da8fb0584fcd11b4475a96075b067fa96a84a3b3e092f47ca" exitCode=0
Jan 07 09:51:56 crc kubenswrapper[5131]: I0107 09:51:56.368126 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rq7z2" event={"ID":"5b21d118-e577-4bf0-a27c-f8fe3f05adc6","Type":"ContainerDied","Data":"59f2ad1057e3811da8fb0584fcd11b4475a96075b067fa96a84a3b3e092f47ca"}
Jan 07 09:51:56 crc kubenswrapper[5131]: I0107 09:51:56.368273 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rq7z2" event={"ID":"5b21d118-e577-4bf0-a27c-f8fe3f05adc6","Type":"ContainerStarted","Data":"41fff9a89a231ed4eed25db1f13ec50779c0c1d0fb98945ab4d7601b67b492b0"}
Jan 07 09:51:56 crc kubenswrapper[5131]: I0107 09:51:56.387573 5131 generic.go:358] "Generic (PLEG): container finished" podID="f04172ba-2c1f-4d8f-b742-7d182136ca81" containerID="9df5116a0ef973141e408ef8795906b3c0e76c85c20aafd2db2d6b6be77d3cc3" exitCode=0
Jan 07 09:51:56 crc kubenswrapper[5131]: I0107 09:51:56.388359 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t8cf2" event={"ID":"f04172ba-2c1f-4d8f-b742-7d182136ca81","Type":"ContainerDied","Data":"9df5116a0ef973141e408ef8795906b3c0e76c85c20aafd2db2d6b6be77d3cc3"}
Jan 07 09:51:56 crc kubenswrapper[5131]: I0107 09:51:56.388398 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t8cf2" event={"ID":"f04172ba-2c1f-4d8f-b742-7d182136ca81","Type":"ContainerStarted","Data":"7a86a6f94af62877e2287b221ea9790c68625c7acc3df28a5d96a835a46a6896"}
Jan 07 09:51:56 crc kubenswrapper[5131]: I0107 09:51:56.398701 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"fc2125a32ba245df9ef9b42ab6e69c48639cce49ef940c2210b61f4cdd4c8057"}
Jan 07 09:51:56 crc kubenswrapper[5131]: I0107 09:51:56.398895 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 07 09:51:56 crc kubenswrapper[5131]: I0107 09:51:56.422114 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"128967f29b4805714181f48945485c276feb9d4331e3bf50c81c7deb88bb4c8d"}
Jan 07 09:51:56 crc kubenswrapper[5131]: I0107 09:51:56.426550 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"6f16686c-cacb-409a-a551-b29b54a60782","Type":"ContainerStarted","Data":"36f51449f915da4dee2d2ca265e495c8eee795813cfcc3fbd5ea05e7d99a146a"}
Jan 07 09:51:56 crc kubenswrapper[5131]: I0107 09:51:56.430292 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 07 09:51:56 crc kubenswrapper[5131]: E0107 09:51:56.430868 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:56.930828595 +0000 UTC m=+145.097130199 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 07 09:51:56 crc kubenswrapper[5131]: I0107 09:51:56.434657 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-5777786469-cnl99"
Jan 07 09:51:56 crc kubenswrapper[5131]: I0107 09:51:56.534352 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4"
Jan 07 09:51:56 crc kubenswrapper[5131]: E0107 09:51:56.536416 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:57.036399642 +0000 UTC m=+145.202701286 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 07 09:51:56 crc kubenswrapper[5131]: I0107 09:51:56.587327 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cbvmr"]
Jan 07 09:51:56 crc kubenswrapper[5131]: W0107 09:51:56.607968 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a44502e_cd8c_4525_95f6_33c1eab86d42.slice/crio-a27245926c6a1e08507e3bda75330e68e6e1aef76cc3a09361cf0a01359b6ee9 WatchSource:0}: Error finding container a27245926c6a1e08507e3bda75330e68e6e1aef76cc3a09361cf0a01359b6ee9: Status 404 returned error can't find the container with id a27245926c6a1e08507e3bda75330e68e6e1aef76cc3a09361cf0a01359b6ee9
Jan 07 09:51:56 crc kubenswrapper[5131]: I0107 09:51:56.637148 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 07 09:51:56 crc kubenswrapper[5131]: E0107 09:51:56.637298 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:57.13727105 +0000 UTC m=+145.303572614 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 07 09:51:56 crc kubenswrapper[5131]: I0107 09:51:56.638214 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4"
Jan 07 09:51:56 crc kubenswrapper[5131]: E0107 09:51:56.638916 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:57.138897503 +0000 UTC m=+145.305199057 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 07 09:51:56 crc kubenswrapper[5131]: I0107 09:51:56.740266 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 07 09:51:56 crc kubenswrapper[5131]: E0107 09:51:56.740882 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:57.240863349 +0000 UTC m=+145.407164923 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 07 09:51:56 crc kubenswrapper[5131]: I0107 09:51:56.798372 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6gww5"]
Jan 07 09:51:56 crc kubenswrapper[5131]: I0107 09:51:56.843383 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4"
Jan 07 09:51:56 crc kubenswrapper[5131]: E0107 09:51:56.843693 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:57.343680294 +0000 UTC m=+145.509981858 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 07 09:51:56 crc kubenswrapper[5131]: I0107 09:51:56.944190 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 07 09:51:56 crc kubenswrapper[5131]: E0107 09:51:56.944558 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:57.4445162 +0000 UTC m=+145.610817774 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 07 09:51:56 crc kubenswrapper[5131]: I0107 09:51:56.944795 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4"
Jan 07 09:51:56 crc kubenswrapper[5131]: E0107 09:51:56.945102 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:57.445086896 +0000 UTC m=+145.611388460 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 07 09:51:57 crc kubenswrapper[5131]: I0107 09:51:57.046083 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 07 09:51:57 crc kubenswrapper[5131]: E0107 09:51:57.046810 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:57.546349021 +0000 UTC m=+145.712650595 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 07 09:51:57 crc kubenswrapper[5131]: I0107 09:51:57.047262 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4"
Jan 07 09:51:57 crc kubenswrapper[5131]: E0107 09:51:57.047644 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:57.547633619 +0000 UTC m=+145.713935193 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 07 09:51:57 crc kubenswrapper[5131]: I0107 09:51:57.148726 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 07 09:51:57 crc kubenswrapper[5131]: E0107 09:51:57.148892 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:57.648827841 +0000 UTC m=+145.815129405 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 07 09:51:57 crc kubenswrapper[5131]: I0107 09:51:57.148965 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4"
Jan 07 09:51:57 crc kubenswrapper[5131]: E0107 09:51:57.149290 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:57.649276591 +0000 UTC m=+145.815578155 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 07 09:51:57 crc kubenswrapper[5131]: I0107 09:51:57.166299 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-64d44f6ddf-xvbzj"
Jan 07 09:51:57 crc kubenswrapper[5131]: I0107 09:51:57.166975 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64d44f6ddf-xvbzj"
Jan 07 09:51:57 crc kubenswrapper[5131]: I0107 09:51:57.174368 5131 patch_prober.go:28] interesting pod/console-64d44f6ddf-xvbzj container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body=
Jan 07 09:51:57 crc kubenswrapper[5131]: I0107 09:51:57.174441 5131 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-xvbzj" podUID="1d8f71c1-e1fc-4770-ad03-7a1d4d244ce0" containerName="console" probeResult="failure" output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused"
Jan 07 09:51:57 crc kubenswrapper[5131]: I0107 09:51:57.250415 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 07 09:51:57 crc kubenswrapper[5131]: E0107 09:51:57.250615 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:57.750587528 +0000 UTC m=+145.916889092 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 07 09:51:57 crc kubenswrapper[5131]: I0107 09:51:57.250864 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4"
Jan 07 09:51:57 crc kubenswrapper[5131]: E0107 09:51:57.251625 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:57.751610744 +0000 UTC m=+145.917912308 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 07 09:51:57 crc kubenswrapper[5131]: I0107 09:51:57.352013 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 07 09:51:57 crc kubenswrapper[5131]: E0107 09:51:57.352213 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:57.852184238 +0000 UTC m=+146.018485802 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 07 09:51:57 crc kubenswrapper[5131]: I0107 09:51:57.443375 5131 generic.go:358] "Generic (PLEG): container finished" podID="4a44502e-cd8c-4525-95f6-33c1eab86d42" containerID="7a38fc70b7401d7e94adba227ab6bc98f69b66e6adc85a42bf9f6e689c257a5a" exitCode=0
Jan 07 09:51:57 crc kubenswrapper[5131]: I0107 09:51:57.443614 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbvmr" event={"ID":"4a44502e-cd8c-4525-95f6-33c1eab86d42","Type":"ContainerDied","Data":"7a38fc70b7401d7e94adba227ab6bc98f69b66e6adc85a42bf9f6e689c257a5a"}
Jan 07 09:51:57 crc kubenswrapper[5131]: I0107 09:51:57.443652 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbvmr" event={"ID":"4a44502e-cd8c-4525-95f6-33c1eab86d42","Type":"ContainerStarted","Data":"a27245926c6a1e08507e3bda75330e68e6e1aef76cc3a09361cf0a01359b6ee9"}
Jan 07 09:51:57 crc kubenswrapper[5131]: I0107 09:51:57.455335 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4"
Jan 07 09:51:57 crc kubenswrapper[5131]: E0107 09:51:57.456082 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:57.956066451 +0000 UTC m=+146.122368015 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 07 09:51:57 crc kubenswrapper[5131]: I0107 09:51:57.480503 5131 generic.go:358] "Generic (PLEG): container finished" podID="ef7d2b17-658d-4af6-b15d-5bdadcc4f021" containerID="7668655bf8771597d010c576ae3c1d166afe37fbf39f59d506e0840b2a8a9fbd" exitCode=0
Jan 07 09:51:57 crc kubenswrapper[5131]: I0107 09:51:57.480597 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6gww5" event={"ID":"ef7d2b17-658d-4af6-b15d-5bdadcc4f021","Type":"ContainerDied","Data":"7668655bf8771597d010c576ae3c1d166afe37fbf39f59d506e0840b2a8a9fbd"}
Jan 07 09:51:57 crc kubenswrapper[5131]: I0107 09:51:57.480617 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6gww5" event={"ID":"ef7d2b17-658d-4af6-b15d-5bdadcc4f021","Type":"ContainerStarted","Data":"ba3cedaea69b56b88b2cbf2af9fb9623150186efe957bac1943c967468850e47"}
Jan 07 09:51:57 crc kubenswrapper[5131]: I0107 09:51:57.510184 5131 generic.go:358] "Generic (PLEG): container finished" podID="6f16686c-cacb-409a-a551-b29b54a60782" containerID="ebf8425ea8001b419567edc8251ad019f8ae13b8b16d742d6a51fee8f23d650b" exitCode=0
Jan 07 09:51:57 crc kubenswrapper[5131]: I0107 09:51:57.510336 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"6f16686c-cacb-409a-a551-b29b54a60782","Type":"ContainerDied","Data":"ebf8425ea8001b419567edc8251ad019f8ae13b8b16d742d6a51fee8f23d650b"}
Jan 07 09:51:57 crc kubenswrapper[5131]: I0107 09:51:57.556639 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 07 09:51:57 crc kubenswrapper[5131]: E0107 09:51:57.557104 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:58.057049944 +0000 UTC m=+146.223351518 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 07 09:51:57 crc kubenswrapper[5131]: I0107 09:51:57.558549 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4"
Jan 07 09:51:57 crc kubenswrapper[5131]: E0107 09:51:57.562072 5131 nestedpendingoperations.go:348]
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:58.062058547 +0000 UTC m=+146.228360111 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:57 crc kubenswrapper[5131]: I0107 09:51:57.590122 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Jan 07 09:51:57 crc kubenswrapper[5131]: I0107 09:51:57.661113 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 07 09:51:57 crc kubenswrapper[5131]: E0107 09:51:57.661230 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:58.161205458 +0000 UTC m=+146.327507022 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:57 crc kubenswrapper[5131]: I0107 09:51:57.661426 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:57 crc kubenswrapper[5131]: E0107 09:51:57.661708 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:58.16170135 +0000 UTC m=+146.328002914 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:57 crc kubenswrapper[5131]: I0107 09:51:57.683467 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Jan 07 09:51:57 crc kubenswrapper[5131]: I0107 09:51:57.683604 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 07 09:51:57 crc kubenswrapper[5131]: I0107 09:51:57.685385 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Jan 07 09:51:57 crc kubenswrapper[5131]: I0107 09:51:57.686549 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Jan 07 09:51:57 crc kubenswrapper[5131]: I0107 09:51:57.763181 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 07 09:51:57 crc kubenswrapper[5131]: E0107 09:51:57.763335 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-07 09:51:58.263310941 +0000 UTC m=+146.429612505 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:57 crc kubenswrapper[5131]: I0107 09:51:57.763421 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/04c8b896-cb51-42c7-a684-3145e157ebec-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"04c8b896-cb51-42c7-a684-3145e157ebec\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 07 09:51:57 crc kubenswrapper[5131]: I0107 09:51:57.763471 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:57 crc kubenswrapper[5131]: I0107 09:51:57.763543 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/04c8b896-cb51-42c7-a684-3145e157ebec-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"04c8b896-cb51-42c7-a684-3145e157ebec\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 07 09:51:57 crc kubenswrapper[5131]: E0107 09:51:57.763815 5131 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:58.263806933 +0000 UTC m=+146.430108497 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:57 crc kubenswrapper[5131]: I0107 09:51:57.864729 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 07 09:51:57 crc kubenswrapper[5131]: E0107 09:51:57.864932 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:58.364909811 +0000 UTC m=+146.531211375 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:57 crc kubenswrapper[5131]: I0107 09:51:57.865232 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/04c8b896-cb51-42c7-a684-3145e157ebec-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"04c8b896-cb51-42c7-a684-3145e157ebec\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 07 09:51:57 crc kubenswrapper[5131]: I0107 09:51:57.865382 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/04c8b896-cb51-42c7-a684-3145e157ebec-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"04c8b896-cb51-42c7-a684-3145e157ebec\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 07 09:51:57 crc kubenswrapper[5131]: I0107 09:51:57.865414 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:57 crc kubenswrapper[5131]: I0107 09:51:57.865651 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/04c8b896-cb51-42c7-a684-3145e157ebec-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: 
\"04c8b896-cb51-42c7-a684-3145e157ebec\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 07 09:51:57 crc kubenswrapper[5131]: E0107 09:51:57.865685 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:58.365677896 +0000 UTC m=+146.531979460 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:57 crc kubenswrapper[5131]: I0107 09:51:57.894415 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/04c8b896-cb51-42c7-a684-3145e157ebec-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"04c8b896-cb51-42c7-a684-3145e157ebec\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 07 09:51:57 crc kubenswrapper[5131]: I0107 09:51:57.899441 5131 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 07 09:51:57 crc kubenswrapper[5131]: I0107 09:51:57.968353 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 07 09:51:57 crc kubenswrapper[5131]: E0107 09:51:57.968488 
5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:58.468469929 +0000 UTC m=+146.634771493 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:57 crc kubenswrapper[5131]: I0107 09:51:57.968586 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:57 crc kubenswrapper[5131]: E0107 09:51:57.969115 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:58.469084477 +0000 UTC m=+146.635386041 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:58 crc kubenswrapper[5131]: I0107 09:51:58.007250 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 07 09:51:58 crc kubenswrapper[5131]: I0107 09:51:58.076976 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 07 09:51:58 crc kubenswrapper[5131]: E0107 09:51:58.077159 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:58.577132594 +0000 UTC m=+146.743434158 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:58 crc kubenswrapper[5131]: I0107 09:51:58.077358 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:58 crc kubenswrapper[5131]: E0107 09:51:58.078392 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:58.57837769 +0000 UTC m=+146.744679254 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:58 crc kubenswrapper[5131]: I0107 09:51:58.178251 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 07 09:51:58 crc kubenswrapper[5131]: E0107 09:51:58.178417 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:58.67839938 +0000 UTC m=+146.844700944 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:58 crc kubenswrapper[5131]: I0107 09:51:58.178472 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:58 crc kubenswrapper[5131]: E0107 09:51:58.178741 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-07 09:51:58.678734975 +0000 UTC m=+146.845036539 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-bc9f4" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:58 crc kubenswrapper[5131]: I0107 09:51:58.279913 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 07 09:51:58 crc kubenswrapper[5131]: E0107 09:51:58.280684 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-07 09:51:58.7806651 +0000 UTC m=+146.946966664 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 07 09:51:58 crc kubenswrapper[5131]: I0107 09:51:58.293651 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Jan 07 09:51:58 crc kubenswrapper[5131]: I0107 09:51:58.312982 5131 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-07T09:51:57.899471386Z","UUID":"61531784-e3e8-4545-8914-09315a5aeefb","Handler":null,"Name":"","Endpoint":""} Jan 07 09:51:58 crc kubenswrapper[5131]: I0107 09:51:58.317000 5131 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 07 09:51:58 crc kubenswrapper[5131]: I0107 09:51:58.317022 5131 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 07 09:51:58 crc kubenswrapper[5131]: W0107 09:51:58.330458 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod04c8b896_cb51_42c7_a684_3145e157ebec.slice/crio-97a7a8bb9d492f6456240c559ce67ac4ba23a25b85ff2145d20065efd0601e1e WatchSource:0}: Error finding container 97a7a8bb9d492f6456240c559ce67ac4ba23a25b85ff2145d20065efd0601e1e: Status 404 returned error can't find the container with id 97a7a8bb9d492f6456240c559ce67ac4ba23a25b85ff2145d20065efd0601e1e Jan 07 09:51:58 crc 
kubenswrapper[5131]: I0107 09:51:58.382316 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:58 crc kubenswrapper[5131]: I0107 09:51:58.384953 5131 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 07 09:51:58 crc kubenswrapper[5131]: I0107 09:51:58.384986 5131 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount\"" pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:58 crc kubenswrapper[5131]: I0107 09:51:58.407075 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-bc9f4\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") " pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" Jan 07 09:51:58 crc kubenswrapper[5131]: I0107 09:51:58.483240 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 07 09:51:58 crc kubenswrapper[5131]: I0107 09:51:58.489808 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Jan 07 09:51:58 crc kubenswrapper[5131]: I0107 09:51:58.553520 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"04c8b896-cb51-42c7-a684-3145e157ebec","Type":"ContainerStarted","Data":"97a7a8bb9d492f6456240c559ce67ac4ba23a25b85ff2145d20065efd0601e1e"} Jan 07 09:51:58 crc kubenswrapper[5131]: I0107 09:51:58.572790 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7cl88" event={"ID":"c892059c-f661-4684-9a1e-19e0b0070d24","Type":"ContainerStarted","Data":"24c46655fcb4d4625109187b80793b53442f06b300b50e009385263a5250c2e0"} Jan 07 09:51:58 crc kubenswrapper[5131]: I0107 09:51:58.572863 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7cl88" event={"ID":"c892059c-f661-4684-9a1e-19e0b0070d24","Type":"ContainerStarted","Data":"39f6125fa1eaf1f76d6311332f11975049416866e5c195db4417b9ef4d60c59c"} Jan 07 09:51:58 crc kubenswrapper[5131]: I0107 09:51:58.584236 5131 generic.go:358] "Generic (PLEG): container finished" podID="6f5047a5-cbaa-4193-a89d-901db9b002d8" containerID="cd7eccaf5b40c91537741171e0c4006c1e54756bd174ea443f2e432571dd7f5f" exitCode=0 Jan 07 09:51:58 crc kubenswrapper[5131]: I0107 09:51:58.584591 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29462985-nd9tw" event={"ID":"6f5047a5-cbaa-4193-a89d-901db9b002d8","Type":"ContainerDied","Data":"cd7eccaf5b40c91537741171e0c4006c1e54756bd174ea443f2e432571dd7f5f"}
Jan 07 09:51:58 crc kubenswrapper[5131]: I0107 09:51:58.599961 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-bc9f4"
Jan 07 09:51:59 crc kubenswrapper[5131]: I0107 09:51:59.038048 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-bc9f4"]
Jan 07 09:51:59 crc kubenswrapper[5131]: I0107 09:51:59.151224 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc"
Jan 07 09:51:59 crc kubenswrapper[5131]: I0107 09:51:59.248732 5131 ???:1] "http: TLS handshake error from 192.168.126.11:50450: no serving certificate available for the kubelet"
Jan 07 09:51:59 crc kubenswrapper[5131]: I0107 09:51:59.314073 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6f16686c-cacb-409a-a551-b29b54a60782-kubelet-dir\") pod \"6f16686c-cacb-409a-a551-b29b54a60782\" (UID: \"6f16686c-cacb-409a-a551-b29b54a60782\") "
Jan 07 09:51:59 crc kubenswrapper[5131]: I0107 09:51:59.314233 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f16686c-cacb-409a-a551-b29b54a60782-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6f16686c-cacb-409a-a551-b29b54a60782" (UID: "6f16686c-cacb-409a-a551-b29b54a60782"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 07 09:51:59 crc kubenswrapper[5131]: I0107 09:51:59.314280 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6f16686c-cacb-409a-a551-b29b54a60782-kube-api-access\") pod \"6f16686c-cacb-409a-a551-b29b54a60782\" (UID: \"6f16686c-cacb-409a-a551-b29b54a60782\") "
Jan 07 09:51:59 crc kubenswrapper[5131]: I0107 09:51:59.314882 5131 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6f16686c-cacb-409a-a551-b29b54a60782-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 07 09:51:59 crc kubenswrapper[5131]: I0107 09:51:59.323139 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f16686c-cacb-409a-a551-b29b54a60782-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6f16686c-cacb-409a-a551-b29b54a60782" (UID: "6f16686c-cacb-409a-a551-b29b54a60782"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 07 09:51:59 crc kubenswrapper[5131]: I0107 09:51:59.415546 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6f16686c-cacb-409a-a551-b29b54a60782-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 07 09:51:59 crc kubenswrapper[5131]: I0107 09:51:59.626276 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7cl88" event={"ID":"c892059c-f661-4684-9a1e-19e0b0070d24","Type":"ContainerStarted","Data":"8ec9f3448d65b1d9bea1303ce1ef594d19119c13428ef160029648462b47b100"}
Jan 07 09:51:59 crc kubenswrapper[5131]: I0107 09:51:59.629078 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" event={"ID":"9e92757e-cc25-48a6-a774-5c2a8a281576","Type":"ContainerStarted","Data":"3b3aa65566a1147d227d73e1d2550b85e5a401d79085e5d1c810aacca9416c71"}
Jan 07 09:51:59 crc kubenswrapper[5131]: I0107 09:51:59.629099 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" event={"ID":"9e92757e-cc25-48a6-a774-5c2a8a281576","Type":"ContainerStarted","Data":"9e25bd48d055a2e3323cd6e0ca4f9c51f50ef1827794f04f7cfa1498a001f2af"}
Jan 07 09:51:59 crc kubenswrapper[5131]: I0107 09:51:59.629200 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-66587d64c8-bc9f4"
Jan 07 09:51:59 crc kubenswrapper[5131]: I0107 09:51:59.630921 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"04c8b896-cb51-42c7-a684-3145e157ebec","Type":"ContainerStarted","Data":"b9f1111c368aa01063085d62171d89043736f77f554edb8d18b0a0dc934608ed"}
Jan 07 09:51:59 crc kubenswrapper[5131]: I0107 09:51:59.633925 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc"
Jan 07 09:51:59 crc kubenswrapper[5131]: I0107 09:51:59.634303 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"6f16686c-cacb-409a-a551-b29b54a60782","Type":"ContainerDied","Data":"36f51449f915da4dee2d2ca265e495c8eee795813cfcc3fbd5ea05e7d99a146a"}
Jan 07 09:51:59 crc kubenswrapper[5131]: I0107 09:51:59.634351 5131 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36f51449f915da4dee2d2ca265e495c8eee795813cfcc3fbd5ea05e7d99a146a"
Jan 07 09:51:59 crc kubenswrapper[5131]: I0107 09:51:59.647908 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-7cl88" podStartSLOduration=14.647891699 podStartE2EDuration="14.647891699s" podCreationTimestamp="2026-01-07 09:51:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:51:59.646252936 +0000 UTC m=+147.812554500" watchObservedRunningTime="2026-01-07 09:51:59.647891699 +0000 UTC m=+147.814193263"
Jan 07 09:51:59 crc kubenswrapper[5131]: I0107 09:51:59.672473 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-11-crc" podStartSLOduration=2.672460267 podStartE2EDuration="2.672460267s" podCreationTimestamp="2026-01-07 09:51:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:51:59.669601919 +0000 UTC m=+147.835903483" watchObservedRunningTime="2026-01-07 09:51:59.672460267 +0000 UTC m=+147.838761831"
Jan 07 09:51:59 crc kubenswrapper[5131]: I0107 09:51:59.776942 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m"
Jan 07 09:51:59 crc kubenswrapper[5131]: I0107 09:51:59.811713 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" podStartSLOduration=127.811700089 podStartE2EDuration="2m7.811700089s" podCreationTimestamp="2026-01-07 09:49:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:51:59.702850795 +0000 UTC m=+147.869152359" watchObservedRunningTime="2026-01-07 09:51:59.811700089 +0000 UTC m=+147.978001653"
Jan 07 09:51:59 crc kubenswrapper[5131]: I0107 09:51:59.919015 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29462985-nd9tw"
Jan 07 09:52:00 crc kubenswrapper[5131]: I0107 09:52:00.029691 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6f5047a5-cbaa-4193-a89d-901db9b002d8-config-volume\") pod \"6f5047a5-cbaa-4193-a89d-901db9b002d8\" (UID: \"6f5047a5-cbaa-4193-a89d-901db9b002d8\") "
Jan 07 09:52:00 crc kubenswrapper[5131]: I0107 09:52:00.029754 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpfbz\" (UniqueName: \"kubernetes.io/projected/6f5047a5-cbaa-4193-a89d-901db9b002d8-kube-api-access-cpfbz\") pod \"6f5047a5-cbaa-4193-a89d-901db9b002d8\" (UID: \"6f5047a5-cbaa-4193-a89d-901db9b002d8\") "
Jan 07 09:52:00 crc kubenswrapper[5131]: I0107 09:52:00.029804 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6f5047a5-cbaa-4193-a89d-901db9b002d8-secret-volume\") pod \"6f5047a5-cbaa-4193-a89d-901db9b002d8\" (UID: \"6f5047a5-cbaa-4193-a89d-901db9b002d8\") "
Jan 07 09:52:00 crc kubenswrapper[5131]: I0107 09:52:00.030668 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f5047a5-cbaa-4193-a89d-901db9b002d8-config-volume" (OuterVolumeSpecName: "config-volume") pod "6f5047a5-cbaa-4193-a89d-901db9b002d8" (UID: "6f5047a5-cbaa-4193-a89d-901db9b002d8"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 07 09:52:00 crc kubenswrapper[5131]: I0107 09:52:00.053229 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f5047a5-cbaa-4193-a89d-901db9b002d8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6f5047a5-cbaa-4193-a89d-901db9b002d8" (UID: "6f5047a5-cbaa-4193-a89d-901db9b002d8"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 07 09:52:00 crc kubenswrapper[5131]: I0107 09:52:00.054451 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f5047a5-cbaa-4193-a89d-901db9b002d8-kube-api-access-cpfbz" (OuterVolumeSpecName: "kube-api-access-cpfbz") pod "6f5047a5-cbaa-4193-a89d-901db9b002d8" (UID: "6f5047a5-cbaa-4193-a89d-901db9b002d8"). InnerVolumeSpecName "kube-api-access-cpfbz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 07 09:52:00 crc kubenswrapper[5131]: I0107 09:52:00.131550 5131 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6f5047a5-cbaa-4193-a89d-901db9b002d8-config-volume\") on node \"crc\" DevicePath \"\""
Jan 07 09:52:00 crc kubenswrapper[5131]: I0107 09:52:00.131584 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cpfbz\" (UniqueName: \"kubernetes.io/projected/6f5047a5-cbaa-4193-a89d-901db9b002d8-kube-api-access-cpfbz\") on node \"crc\" DevicePath \"\""
Jan 07 09:52:00 crc kubenswrapper[5131]: I0107 09:52:00.131601 5131 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6f5047a5-cbaa-4193-a89d-901db9b002d8-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 07 09:52:00 crc kubenswrapper[5131]: I0107 09:52:00.187815 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9b5059-1b3e-4067-a63d-2952cbe863af" path="/var/lib/kubelet/pods/9e9b5059-1b3e-4067-a63d-2952cbe863af/volumes"
Jan 07 09:52:00 crc kubenswrapper[5131]: I0107 09:52:00.665890 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29462985-nd9tw"
Jan 07 09:52:00 crc kubenswrapper[5131]: I0107 09:52:00.665922 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29462985-nd9tw" event={"ID":"6f5047a5-cbaa-4193-a89d-901db9b002d8","Type":"ContainerDied","Data":"12eebd4427da690fe58a9213eebeed81eccd914257620e0dd1be0df790454e2d"}
Jan 07 09:52:00 crc kubenswrapper[5131]: I0107 09:52:00.665973 5131 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12eebd4427da690fe58a9213eebeed81eccd914257620e0dd1be0df790454e2d"
Jan 07 09:52:00 crc kubenswrapper[5131]: I0107 09:52:00.670151 5131 generic.go:358] "Generic (PLEG): container finished" podID="04c8b896-cb51-42c7-a684-3145e157ebec" containerID="b9f1111c368aa01063085d62171d89043736f77f554edb8d18b0a0dc934608ed" exitCode=0
Jan 07 09:52:00 crc kubenswrapper[5131]: I0107 09:52:00.671240 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"04c8b896-cb51-42c7-a684-3145e157ebec","Type":"ContainerDied","Data":"b9f1111c368aa01063085d62171d89043736f77f554edb8d18b0a0dc934608ed"}
Jan 07 09:52:01 crc kubenswrapper[5131]: E0107 09:52:01.244197 5131 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="020bec0b8df66d061898080a4918b13e7b30e9a5fbe18d9973f6f4e9e6964d54" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 07 09:52:01 crc kubenswrapper[5131]: E0107 09:52:01.246433 5131 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="020bec0b8df66d061898080a4918b13e7b30e9a5fbe18d9973f6f4e9e6964d54" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 07 09:52:01 crc kubenswrapper[5131]: E0107 09:52:01.248103 5131 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="020bec0b8df66d061898080a4918b13e7b30e9a5fbe18d9973f6f4e9e6964d54" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 07 09:52:01 crc kubenswrapper[5131]: E0107 09:52:01.248145 5131 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-grvm4" podUID="c61a2db1-fb94-4541-bc6a-57a2f0075072" containerName="kube-multus-additional-cni-plugins" probeResult="unknown"
Jan 07 09:52:03 crc kubenswrapper[5131]: I0107 09:52:03.272284 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-h4w78"
Jan 07 09:52:04 crc kubenswrapper[5131]: I0107 09:52:04.261386 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-747b44746d-vdsqq"
Jan 07 09:52:07 crc kubenswrapper[5131]: I0107 09:52:07.166878 5131 patch_prober.go:28] interesting pod/console-64d44f6ddf-xvbzj container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body=
Jan 07 09:52:07 crc kubenswrapper[5131]: I0107 09:52:07.167394 5131 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-xvbzj" podUID="1d8f71c1-e1fc-4770-ad03-7a1d4d244ce0" containerName="console" probeResult="failure" output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused"
Jan 07 09:52:07 crc kubenswrapper[5131]: I0107 09:52:07.727733 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"04c8b896-cb51-42c7-a684-3145e157ebec","Type":"ContainerDied","Data":"97a7a8bb9d492f6456240c559ce67ac4ba23a25b85ff2145d20065efd0601e1e"}
Jan 07 09:52:07 crc kubenswrapper[5131]: I0107 09:52:07.727795 5131 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97a7a8bb9d492f6456240c559ce67ac4ba23a25b85ff2145d20065efd0601e1e"
Jan 07 09:52:07 crc kubenswrapper[5131]: I0107 09:52:07.749452 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc"
Jan 07 09:52:07 crc kubenswrapper[5131]: I0107 09:52:07.857780 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/04c8b896-cb51-42c7-a684-3145e157ebec-kubelet-dir\") pod \"04c8b896-cb51-42c7-a684-3145e157ebec\" (UID: \"04c8b896-cb51-42c7-a684-3145e157ebec\") "
Jan 07 09:52:07 crc kubenswrapper[5131]: I0107 09:52:07.857988 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/04c8b896-cb51-42c7-a684-3145e157ebec-kube-api-access\") pod \"04c8b896-cb51-42c7-a684-3145e157ebec\" (UID: \"04c8b896-cb51-42c7-a684-3145e157ebec\") "
Jan 07 09:52:07 crc kubenswrapper[5131]: I0107 09:52:07.858132 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04c8b896-cb51-42c7-a684-3145e157ebec-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "04c8b896-cb51-42c7-a684-3145e157ebec" (UID: "04c8b896-cb51-42c7-a684-3145e157ebec"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 07 09:52:07 crc kubenswrapper[5131]: I0107 09:52:07.858444 5131 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/04c8b896-cb51-42c7-a684-3145e157ebec-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 07 09:52:07 crc kubenswrapper[5131]: I0107 09:52:07.863210 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04c8b896-cb51-42c7-a684-3145e157ebec-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "04c8b896-cb51-42c7-a684-3145e157ebec" (UID: "04c8b896-cb51-42c7-a684-3145e157ebec"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 07 09:52:07 crc kubenswrapper[5131]: I0107 09:52:07.959403 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/04c8b896-cb51-42c7-a684-3145e157ebec-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 07 09:52:08 crc kubenswrapper[5131]: I0107 09:52:08.732417 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc"
Jan 07 09:52:09 crc kubenswrapper[5131]: I0107 09:52:09.520490 5131 ???:1] "http: TLS handshake error from 192.168.126.11:49908: no serving certificate available for the kubelet"
Jan 07 09:52:11 crc kubenswrapper[5131]: E0107 09:52:11.245542 5131 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="020bec0b8df66d061898080a4918b13e7b30e9a5fbe18d9973f6f4e9e6964d54" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 07 09:52:11 crc kubenswrapper[5131]: E0107 09:52:11.247592 5131 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="020bec0b8df66d061898080a4918b13e7b30e9a5fbe18d9973f6f4e9e6964d54" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 07 09:52:11 crc kubenswrapper[5131]: E0107 09:52:11.249065 5131 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="020bec0b8df66d061898080a4918b13e7b30e9a5fbe18d9973f6f4e9e6964d54" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 07 09:52:11 crc kubenswrapper[5131]: E0107 09:52:11.249101 5131 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-grvm4" podUID="c61a2db1-fb94-4541-bc6a-57a2f0075072" containerName="kube-multus-additional-cni-plugins" probeResult="unknown"
Jan 07 09:52:13 crc kubenswrapper[5131]: I0107 09:52:13.774726 5131 generic.go:358] "Generic (PLEG): container finished" podID="3c9c707f-f88b-4ba9-9722-51779966c49b" containerID="befb6720e3c65a4f1c7116d3681b76ab982c5383309f38b95431962d253a4d7d" exitCode=0
Jan 07 09:52:13 crc kubenswrapper[5131]: I0107 09:52:13.774821 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5fgb6" event={"ID":"3c9c707f-f88b-4ba9-9722-51779966c49b","Type":"ContainerDied","Data":"befb6720e3c65a4f1c7116d3681b76ab982c5383309f38b95431962d253a4d7d"}
Jan 07 09:52:13 crc kubenswrapper[5131]: I0107 09:52:13.778205 5131 generic.go:358] "Generic (PLEG): container finished" podID="17b0639f-c0b9-4140-af54-4da733719edb" containerID="2497c3f802b4302661be5d7104fbbdd07364f25b15b2c8f28bdd81703fef770b" exitCode=0
Jan 07 09:52:13 crc kubenswrapper[5131]: I0107 09:52:13.778317 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rv52j" event={"ID":"17b0639f-c0b9-4140-af54-4da733719edb","Type":"ContainerDied","Data":"2497c3f802b4302661be5d7104fbbdd07364f25b15b2c8f28bdd81703fef770b"}
Jan 07 09:52:13 crc kubenswrapper[5131]: I0107 09:52:13.780070 5131 generic.go:358] "Generic (PLEG): container finished" podID="b8cac87e-c013-4988-a977-5b1f038c1d34" containerID="d45e857e0b41ab6a86a0b5757498c0e5d38b6578dc1175c6829879a56ab43d39" exitCode=0
Jan 07 09:52:13 crc kubenswrapper[5131]: I0107 09:52:13.780183 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9wkb" event={"ID":"b8cac87e-c013-4988-a977-5b1f038c1d34","Type":"ContainerDied","Data":"d45e857e0b41ab6a86a0b5757498c0e5d38b6578dc1175c6829879a56ab43d39"}
Jan 07 09:52:13 crc kubenswrapper[5131]: I0107 09:52:13.788013 5131 generic.go:358] "Generic (PLEG): container finished" podID="07c53e69-8037-4261-a288-5f4505e6f7e5" containerID="8392a1a78fda39698fcbcbfb30761721a4e5f331f86588c4082e97b6ba8c5083" exitCode=0
Jan 07 09:52:13 crc kubenswrapper[5131]: I0107 09:52:13.788361 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-db2q2" event={"ID":"07c53e69-8037-4261-a288-5f4505e6f7e5","Type":"ContainerDied","Data":"8392a1a78fda39698fcbcbfb30761721a4e5f331f86588c4082e97b6ba8c5083"}
Jan 07 09:52:13 crc kubenswrapper[5131]: I0107 09:52:13.791426 5131 generic.go:358] "Generic (PLEG): container finished" podID="5b21d118-e577-4bf0-a27c-f8fe3f05adc6" containerID="3224f9c43a644db4d26fb89a60eedd9fd072777f6834cb82d2a2491dc53d1d95" exitCode=0
Jan 07 09:52:13 crc kubenswrapper[5131]: I0107 09:52:13.791539 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rq7z2" event={"ID":"5b21d118-e577-4bf0-a27c-f8fe3f05adc6","Type":"ContainerDied","Data":"3224f9c43a644db4d26fb89a60eedd9fd072777f6834cb82d2a2491dc53d1d95"}
Jan 07 09:52:13 crc kubenswrapper[5131]: I0107 09:52:13.797927 5131 generic.go:358] "Generic (PLEG): container finished" podID="f04172ba-2c1f-4d8f-b742-7d182136ca81" containerID="04fb37854152d7df41852802494ff8288e68728d61c83b4be0b010264cfb19f0" exitCode=0
Jan 07 09:52:13 crc kubenswrapper[5131]: I0107 09:52:13.798043 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t8cf2" event={"ID":"f04172ba-2c1f-4d8f-b742-7d182136ca81","Type":"ContainerDied","Data":"04fb37854152d7df41852802494ff8288e68728d61c83b4be0b010264cfb19f0"}
Jan 07 09:52:13 crc kubenswrapper[5131]: I0107 09:52:13.799890 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbvmr" event={"ID":"4a44502e-cd8c-4525-95f6-33c1eab86d42","Type":"ContainerStarted","Data":"0f6c5a485691c047eb929831f7b652fb9a61e0f67f693427d7303a8c325c3097"}
Jan 07 09:52:13 crc kubenswrapper[5131]: I0107 09:52:13.804622 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6gww5" event={"ID":"ef7d2b17-658d-4af6-b15d-5bdadcc4f021","Type":"ContainerStarted","Data":"857b11b53769150553b35062ddbeb2c9ecd14f5b756053194cc21606f43b8d35"}
Jan 07 09:52:14 crc kubenswrapper[5131]: I0107 09:52:14.814653 5131 generic.go:358] "Generic (PLEG): container finished" podID="ef7d2b17-658d-4af6-b15d-5bdadcc4f021" containerID="857b11b53769150553b35062ddbeb2c9ecd14f5b756053194cc21606f43b8d35" exitCode=0
Jan 07 09:52:14 crc kubenswrapper[5131]: I0107 09:52:14.814856 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6gww5" event={"ID":"ef7d2b17-658d-4af6-b15d-5bdadcc4f021","Type":"ContainerDied","Data":"857b11b53769150553b35062ddbeb2c9ecd14f5b756053194cc21606f43b8d35"}
Jan 07 09:52:14 crc kubenswrapper[5131]: I0107 09:52:14.817481 5131 generic.go:358] "Generic (PLEG): container finished" podID="4a44502e-cd8c-4525-95f6-33c1eab86d42" containerID="0f6c5a485691c047eb929831f7b652fb9a61e0f67f693427d7303a8c325c3097" exitCode=0
Jan 07 09:52:14 crc kubenswrapper[5131]: I0107 09:52:14.817753 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbvmr" event={"ID":"4a44502e-cd8c-4525-95f6-33c1eab86d42","Type":"ContainerDied","Data":"0f6c5a485691c047eb929831f7b652fb9a61e0f67f693427d7303a8c325c3097"}
Jan 07 09:52:15 crc kubenswrapper[5131]: I0107 09:52:15.830787 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rv52j" event={"ID":"17b0639f-c0b9-4140-af54-4da733719edb","Type":"ContainerStarted","Data":"33e781867b9cf24ea5bfb7828025c249cf82ddf2aefe458372eb3b6571d38425"}
Jan 07 09:52:16 crc kubenswrapper[5131]: I0107 09:52:16.849347 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5fgb6" event={"ID":"3c9c707f-f88b-4ba9-9722-51779966c49b","Type":"ContainerStarted","Data":"46235cc6f1b39e190e46fa949c73d83c0d49d0d2ce065446114205d70d304dfd"}
Jan 07 09:52:16 crc kubenswrapper[5131]: I0107 09:52:16.856219 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9wkb" event={"ID":"b8cac87e-c013-4988-a977-5b1f038c1d34","Type":"ContainerStarted","Data":"eec4911518d01074a957a1978a2e38447897ebd56c38cb3d25bbb8c71d4e3768"}
Jan 07 09:52:16 crc kubenswrapper[5131]: I0107 09:52:16.930810 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rv52j" podStartSLOduration=7.405572518 podStartE2EDuration="24.930793787s" podCreationTimestamp="2026-01-07 09:51:52 +0000 UTC" firstStartedPulling="2026-01-07 09:51:55.346581711 +0000 UTC m=+143.512883275" lastFinishedPulling="2026-01-07 09:52:12.87180298 +0000 UTC m=+161.038104544" observedRunningTime="2026-01-07 09:52:16.929463117 +0000 UTC m=+165.095764691" watchObservedRunningTime="2026-01-07 09:52:16.930793787 +0000 UTC m=+165.097095351"
Jan 07 09:52:16 crc kubenswrapper[5131]: I0107 09:52:16.951805 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-l9wkb" podStartSLOduration=6.290028568 podStartE2EDuration="24.951789355s" podCreationTimestamp="2026-01-07 09:51:52 +0000 UTC" firstStartedPulling="2026-01-07 09:51:54.213932626 +0000 UTC m=+142.380234190" lastFinishedPulling="2026-01-07 09:52:12.875693393 +0000 UTC m=+161.041994977" observedRunningTime="2026-01-07 09:52:16.948200975 +0000 UTC m=+165.114502549" watchObservedRunningTime="2026-01-07 09:52:16.951789355 +0000 UTC m=+165.118090919"
Jan 07 09:52:17 crc kubenswrapper[5131]: I0107 09:52:17.174757 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64d44f6ddf-xvbzj"
Jan 07 09:52:17 crc kubenswrapper[5131]: I0107 09:52:17.862049 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-db2q2" event={"ID":"07c53e69-8037-4261-a288-5f4505e6f7e5","Type":"ContainerStarted","Data":"be639cc6e215a1653e1882ac31c810cdc436d6d4900641afe82fc02c9e461a7a"}
Jan 07 09:52:18 crc kubenswrapper[5131]: I0107 09:52:18.368973 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64d44f6ddf-xvbzj"
Jan 07 09:52:18 crc kubenswrapper[5131]: I0107 09:52:18.397063 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5fgb6" podStartSLOduration=7.793686711 podStartE2EDuration="25.397041601s" podCreationTimestamp="2026-01-07 09:51:53 +0000 UTC" firstStartedPulling="2026-01-07 09:51:55.301176582 +0000 UTC m=+143.467478146" lastFinishedPulling="2026-01-07 09:52:12.904531442 +0000 UTC m=+161.070833036" observedRunningTime="2026-01-07 09:52:18.396245295 +0000 UTC m=+166.562546889" watchObservedRunningTime="2026-01-07 09:52:18.397041601 +0000 UTC m=+166.563343195"
Jan 07 09:52:18 crc kubenswrapper[5131]: I0107 09:52:18.869623 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rq7z2" event={"ID":"5b21d118-e577-4bf0-a27c-f8fe3f05adc6","Type":"ContainerStarted","Data":"422b6128321ad6a7f1c72401270889406323bd509c077e818856bb05c65f76a0"}
Jan 07 09:52:18 crc kubenswrapper[5131]: I0107 09:52:18.871512 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t8cf2" event={"ID":"f04172ba-2c1f-4d8f-b742-7d182136ca81","Type":"ContainerStarted","Data":"774fc91737b83234ad759d33e7acab61ae58477c940f31e59fe8c208b89b4f62"}
Jan 07 09:52:19 crc kubenswrapper[5131]: I0107 09:52:19.591711 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-db2q2" podStartSLOduration=8.941208535 podStartE2EDuration="27.591684527s" podCreationTimestamp="2026-01-07 09:51:52 +0000 UTC" firstStartedPulling="2026-01-07 09:51:54.220896567 +0000 UTC m=+142.387198132" lastFinishedPulling="2026-01-07 09:52:12.87137256 +0000 UTC m=+161.037674124" observedRunningTime="2026-01-07 09:52:19.589675407 +0000 UTC m=+167.755976981" watchObservedRunningTime="2026-01-07 09:52:19.591684527 +0000 UTC m=+167.757986091"
Jan 07 09:52:19 crc kubenswrapper[5131]: I0107 09:52:19.878106 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6gww5" event={"ID":"ef7d2b17-658d-4af6-b15d-5bdadcc4f021","Type":"ContainerStarted","Data":"2f9abb0d24c5647851c5e089a4878080d0c36c35271eaa995b3947fcec80867f"}
Jan 07 09:52:19 crc kubenswrapper[5131]: I0107 09:52:19.894694 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6gww5" podStartSLOduration=9.504639447 podStartE2EDuration="24.894677367s" podCreationTimestamp="2026-01-07 09:51:55 +0000 UTC" firstStartedPulling="2026-01-07 09:51:57.481355061 +0000 UTC m=+145.647656625" lastFinishedPulling="2026-01-07 09:52:12.871392951 +0000 UTC m=+161.037694545" observedRunningTime="2026-01-07 09:52:19.893791707 +0000 UTC m=+168.060093271" watchObservedRunningTime="2026-01-07 09:52:19.894677367 +0000 UTC m=+168.060978931"
Jan 07 09:52:19 crc kubenswrapper[5131]: I0107 09:52:19.911960 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rq7z2" podStartSLOduration=8.410302422 podStartE2EDuration="24.911939918s" podCreationTimestamp="2026-01-07 09:51:55 +0000 UTC" firstStartedPulling="2026-01-07 09:51:56.369020132 +0000 UTC m=+144.535321696" lastFinishedPulling="2026-01-07 09:52:12.870657588 +0000 UTC m=+161.036959192" observedRunningTime="2026-01-07 09:52:19.90928996 +0000 UTC m=+168.075591534" watchObservedRunningTime="2026-01-07 09:52:19.911939918 +0000 UTC m=+168.078241492"
Jan 07 09:52:19 crc kubenswrapper[5131]: I0107 09:52:19.930126 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-t8cf2" podStartSLOduration=9.495366254 podStartE2EDuration="25.93010487s" podCreationTimestamp="2026-01-07 09:51:54 +0000 UTC" firstStartedPulling="2026-01-07 09:51:56.389932387 +0000 UTC m=+144.556233961" lastFinishedPulling="2026-01-07 09:52:12.824670983 +0000 UTC m=+160.990972577" observedRunningTime="2026-01-07 09:52:19.926565532 +0000 UTC m=+168.092867106" watchObservedRunningTime="2026-01-07 09:52:19.93010487 +0000 UTC m=+168.096406434"
Jan 07 09:52:20 crc kubenswrapper[5131]: I0107 09:52:20.676041 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66587d64c8-bc9f4"
Jan 07 09:52:20 crc kubenswrapper[5131]: I0107 09:52:20.884283 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbvmr" event={"ID":"4a44502e-cd8c-4525-95f6-33c1eab86d42","Type":"ContainerStarted","Data":"3f5a555c406d013a60947ffa73e963e1b2ff5a23540d4b6de8d228bfcad205c2"}
Jan 07 09:52:20 crc kubenswrapper[5131]: I0107 09:52:20.907850 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cbvmr" podStartSLOduration=10.377286783 podStartE2EDuration="25.907811502s" podCreationTimestamp="2026-01-07 09:51:55 +0000 UTC" firstStartedPulling="2026-01-07 09:51:57.445041058 +0000 UTC m=+145.611342632" lastFinishedPulling="2026-01-07 09:52:12.975565787 +0000 UTC m=+161.141867351" observedRunningTime="2026-01-07 09:52:20.904887452 +0000 UTC m=+169.071189026" watchObservedRunningTime="2026-01-07 09:52:20.907811502 +0000 UTC m=+169.074113066"
Jan 07 09:52:21 crc kubenswrapper[5131]: E0107 09:52:21.247965 5131 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="020bec0b8df66d061898080a4918b13e7b30e9a5fbe18d9973f6f4e9e6964d54" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 07 09:52:21 crc kubenswrapper[5131]: E0107 09:52:21.251335 5131 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="020bec0b8df66d061898080a4918b13e7b30e9a5fbe18d9973f6f4e9e6964d54" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 07 09:52:21 crc kubenswrapper[5131]: E0107 09:52:21.253363 5131 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="020bec0b8df66d061898080a4918b13e7b30e9a5fbe18d9973f6f4e9e6964d54" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 07 09:52:21 crc kubenswrapper[5131]: E0107 09:52:21.253410 5131 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-grvm4" podUID="c61a2db1-fb94-4541-bc6a-57a2f0075072" containerName="kube-multus-additional-cni-plugins" probeResult="unknown"
Jan 07 09:52:22 crc kubenswrapper[5131]: I0107 09:52:22.966399 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-db2q2"
Jan 07 09:52:22 crc kubenswrapper[5131]: I0107 09:52:22.966494 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-db2q2"
Jan 07 09:52:23 crc kubenswrapper[5131]: I0107 09:52:23.100203 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-l9wkb"
Jan 07 09:52:23 crc kubenswrapper[5131]: I0107 09:52:23.100253 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-l9wkb"
Jan 07 09:52:23 crc kubenswrapper[5131]: I0107 09:52:23.412932 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rv52j"
Jan 07 09:52:23 crc kubenswrapper[5131]: I0107 09:52:23.413070 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-rv52j"
Jan 07 09:52:23 crc kubenswrapper[5131]: I0107 09:52:23.565481 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-5fgb6"
Jan 07 09:52:23 crc kubenswrapper[5131]: I0107 09:52:23.565809 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5fgb6"
Jan 07 09:52:25 crc kubenswrapper[5131]: I0107 09:52:25.141647 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-t8cf2"
Jan 07 09:52:25 crc kubenswrapper[5131]: I0107 09:52:25.142250 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-t8cf2"
Jan 07 09:52:25 crc kubenswrapper[5131]: I0107 09:52:25.344140 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2r6qr"
Jan 07 09:52:25 crc kubenswrapper[5131]: I0107 09:52:25.553681 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rq7z2"
Jan 07 09:52:25 crc kubenswrapper[5131]: I0107 09:52:25.553747 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-rq7z2"
Jan 07 09:52:26 crc kubenswrapper[5131]: I0107 09:52:26.142979 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-cbvmr"
Jan 07 09:52:26 crc kubenswrapper[5131]: I0107 09:52:26.143039 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cbvmr"
Jan 07 09:52:26 crc kubenswrapper[5131]: I0107 09:52:26.330447 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-6gww5"
Jan 07 09:52:26 crc kubenswrapper[5131]: I0107 09:52:26.330697 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6gww5"
Jan 07 09:52:27 crc kubenswrapper[5131]: I0107 09:52:27.514865 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cbvmr"
Jan 07 09:52:27 crc kubenswrapper[5131]: I0107 09:52:27.515061 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rq7z2"
Jan 07 09:52:27 crc kubenswrapper[5131]: I0107 09:52:27.516758 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-db2q2"
Jan 07 09:52:27 crc kubenswrapper[5131]: I0107 09:52:27.517464 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 07 09:52:27 crc kubenswrapper[5131]: I0107 09:52:27.519606 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rv52j"
Jan 07 09:52:27 crc kubenswrapper[5131]: I0107 09:52:27.520271 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-t8cf2"
Jan 07 09:52:27 crc kubenswrapper[5131]: I0107 09:52:27.521427 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5fgb6"
Jan 07 09:52:27 crc kubenswrapper[5131]: I0107 09:52:27.521572 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-l9wkb"
Jan 07 09:52:27 crc kubenswrapper[5131]: I0107 09:52:27.521693 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started"
pod="openshift-marketplace/redhat-operators-6gww5" Jan 07 09:52:27 crc kubenswrapper[5131]: I0107 09:52:27.571060 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rq7z2" Jan 07 09:52:27 crc kubenswrapper[5131]: I0107 09:52:27.573881 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5fgb6" Jan 07 09:52:27 crc kubenswrapper[5131]: I0107 09:52:27.581044 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-db2q2" Jan 07 09:52:27 crc kubenswrapper[5131]: I0107 09:52:27.581849 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cbvmr" Jan 07 09:52:27 crc kubenswrapper[5131]: I0107 09:52:27.587532 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-t8cf2" Jan 07 09:52:27 crc kubenswrapper[5131]: I0107 09:52:27.591959 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rv52j" Jan 07 09:52:27 crc kubenswrapper[5131]: I0107 09:52:27.602693 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-l9wkb" Jan 07 09:52:27 crc kubenswrapper[5131]: I0107 09:52:27.929717 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-grvm4_c61a2db1-fb94-4541-bc6a-57a2f0075072/kube-multus-additional-cni-plugins/0.log" Jan 07 09:52:27 crc kubenswrapper[5131]: I0107 09:52:27.930018 5131 generic.go:358] "Generic (PLEG): container finished" podID="c61a2db1-fb94-4541-bc6a-57a2f0075072" containerID="020bec0b8df66d061898080a4918b13e7b30e9a5fbe18d9973f6f4e9e6964d54" exitCode=137 Jan 07 09:52:27 crc kubenswrapper[5131]: I0107 09:52:27.930119 5131 kubelet.go:2569] "SyncLoop (PLEG): 
event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-grvm4" event={"ID":"c61a2db1-fb94-4541-bc6a-57a2f0075072","Type":"ContainerDied","Data":"020bec0b8df66d061898080a4918b13e7b30e9a5fbe18d9973f6f4e9e6964d54"} Jan 07 09:52:27 crc kubenswrapper[5131]: I0107 09:52:27.961285 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6gww5" Jan 07 09:52:28 crc kubenswrapper[5131]: I0107 09:52:28.142564 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-grvm4_c61a2db1-fb94-4541-bc6a-57a2f0075072/kube-multus-additional-cni-plugins/0.log" Jan 07 09:52:28 crc kubenswrapper[5131]: I0107 09:52:28.142684 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-grvm4" Jan 07 09:52:28 crc kubenswrapper[5131]: I0107 09:52:28.250324 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/c61a2db1-fb94-4541-bc6a-57a2f0075072-ready\") pod \"c61a2db1-fb94-4541-bc6a-57a2f0075072\" (UID: \"c61a2db1-fb94-4541-bc6a-57a2f0075072\") " Jan 07 09:52:28 crc kubenswrapper[5131]: I0107 09:52:28.250372 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c61a2db1-fb94-4541-bc6a-57a2f0075072-tuning-conf-dir\") pod \"c61a2db1-fb94-4541-bc6a-57a2f0075072\" (UID: \"c61a2db1-fb94-4541-bc6a-57a2f0075072\") " Jan 07 09:52:28 crc kubenswrapper[5131]: I0107 09:52:28.250442 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c61a2db1-fb94-4541-bc6a-57a2f0075072-cni-sysctl-allowlist\") pod \"c61a2db1-fb94-4541-bc6a-57a2f0075072\" (UID: \"c61a2db1-fb94-4541-bc6a-57a2f0075072\") " Jan 07 09:52:28 crc kubenswrapper[5131]: I0107 09:52:28.250631 5131 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldrw8\" (UniqueName: \"kubernetes.io/projected/c61a2db1-fb94-4541-bc6a-57a2f0075072-kube-api-access-ldrw8\") pod \"c61a2db1-fb94-4541-bc6a-57a2f0075072\" (UID: \"c61a2db1-fb94-4541-bc6a-57a2f0075072\") " Jan 07 09:52:28 crc kubenswrapper[5131]: I0107 09:52:28.251500 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c61a2db1-fb94-4541-bc6a-57a2f0075072-ready" (OuterVolumeSpecName: "ready") pod "c61a2db1-fb94-4541-bc6a-57a2f0075072" (UID: "c61a2db1-fb94-4541-bc6a-57a2f0075072"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 09:52:28 crc kubenswrapper[5131]: I0107 09:52:28.251608 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c61a2db1-fb94-4541-bc6a-57a2f0075072-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "c61a2db1-fb94-4541-bc6a-57a2f0075072" (UID: "c61a2db1-fb94-4541-bc6a-57a2f0075072"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 07 09:52:28 crc kubenswrapper[5131]: I0107 09:52:28.252065 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c61a2db1-fb94-4541-bc6a-57a2f0075072-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "c61a2db1-fb94-4541-bc6a-57a2f0075072" (UID: "c61a2db1-fb94-4541-bc6a-57a2f0075072"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:52:28 crc kubenswrapper[5131]: I0107 09:52:28.259629 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c61a2db1-fb94-4541-bc6a-57a2f0075072-kube-api-access-ldrw8" (OuterVolumeSpecName: "kube-api-access-ldrw8") pod "c61a2db1-fb94-4541-bc6a-57a2f0075072" (UID: "c61a2db1-fb94-4541-bc6a-57a2f0075072"). 
InnerVolumeSpecName "kube-api-access-ldrw8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:52:28 crc kubenswrapper[5131]: I0107 09:52:28.352070 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ldrw8\" (UniqueName: \"kubernetes.io/projected/c61a2db1-fb94-4541-bc6a-57a2f0075072-kube-api-access-ldrw8\") on node \"crc\" DevicePath \"\"" Jan 07 09:52:28 crc kubenswrapper[5131]: I0107 09:52:28.352739 5131 reconciler_common.go:299] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/c61a2db1-fb94-4541-bc6a-57a2f0075072-ready\") on node \"crc\" DevicePath \"\"" Jan 07 09:52:28 crc kubenswrapper[5131]: I0107 09:52:28.352857 5131 reconciler_common.go:299] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c61a2db1-fb94-4541-bc6a-57a2f0075072-tuning-conf-dir\") on node \"crc\" DevicePath \"\"" Jan 07 09:52:28 crc kubenswrapper[5131]: I0107 09:52:28.352926 5131 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c61a2db1-fb94-4541-bc6a-57a2f0075072-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 07 09:52:28 crc kubenswrapper[5131]: I0107 09:52:28.936523 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-grvm4_c61a2db1-fb94-4541-bc6a-57a2f0075072/kube-multus-additional-cni-plugins/0.log" Jan 07 09:52:28 crc kubenswrapper[5131]: I0107 09:52:28.936989 5131 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-grvm4" Jan 07 09:52:28 crc kubenswrapper[5131]: I0107 09:52:28.937290 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-grvm4" event={"ID":"c61a2db1-fb94-4541-bc6a-57a2f0075072","Type":"ContainerDied","Data":"96372a87f1a7a128c87f7a00e9497f17ed0a4d11f5a3b694206366a24622f9d0"} Jan 07 09:52:28 crc kubenswrapper[5131]: I0107 09:52:28.937330 5131 scope.go:117] "RemoveContainer" containerID="020bec0b8df66d061898080a4918b13e7b30e9a5fbe18d9973f6f4e9e6964d54" Jan 07 09:52:28 crc kubenswrapper[5131]: I0107 09:52:28.973896 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-grvm4"] Jan 07 09:52:28 crc kubenswrapper[5131]: I0107 09:52:28.976508 5131 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-grvm4"] Jan 07 09:52:30 crc kubenswrapper[5131]: I0107 09:52:30.004348 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5fgb6"] Jan 07 09:52:30 crc kubenswrapper[5131]: I0107 09:52:30.004621 5131 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5fgb6" podUID="3c9c707f-f88b-4ba9-9722-51779966c49b" containerName="registry-server" containerID="cri-o://46235cc6f1b39e190e46fa949c73d83c0d49d0d2ce065446114205d70d304dfd" gracePeriod=2 Jan 07 09:52:30 crc kubenswrapper[5131]: I0107 09:52:30.025552 5131 ???:1] "http: TLS handshake error from 192.168.126.11:40566: no serving certificate available for the kubelet" Jan 07 09:52:30 crc kubenswrapper[5131]: I0107 09:52:30.193540 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c61a2db1-fb94-4541-bc6a-57a2f0075072" path="/var/lib/kubelet/pods/c61a2db1-fb94-4541-bc6a-57a2f0075072/volumes" Jan 07 09:52:30 crc kubenswrapper[5131]: I0107 09:52:30.208409 5131 kubelet.go:2553] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/redhat-marketplace-rq7z2"] Jan 07 09:52:30 crc kubenswrapper[5131]: I0107 09:52:30.208629 5131 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rq7z2" podUID="5b21d118-e577-4bf0-a27c-f8fe3f05adc6" containerName="registry-server" containerID="cri-o://422b6128321ad6a7f1c72401270889406323bd509c077e818856bb05c65f76a0" gracePeriod=2 Jan 07 09:52:30 crc kubenswrapper[5131]: I0107 09:52:30.354858 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5fgb6" Jan 07 09:52:30 crc kubenswrapper[5131]: I0107 09:52:30.388572 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c9c707f-f88b-4ba9-9722-51779966c49b-catalog-content\") pod \"3c9c707f-f88b-4ba9-9722-51779966c49b\" (UID: \"3c9c707f-f88b-4ba9-9722-51779966c49b\") " Jan 07 09:52:30 crc kubenswrapper[5131]: I0107 09:52:30.388624 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c9c707f-f88b-4ba9-9722-51779966c49b-utilities\") pod \"3c9c707f-f88b-4ba9-9722-51779966c49b\" (UID: \"3c9c707f-f88b-4ba9-9722-51779966c49b\") " Jan 07 09:52:30 crc kubenswrapper[5131]: I0107 09:52:30.388692 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2frr\" (UniqueName: \"kubernetes.io/projected/3c9c707f-f88b-4ba9-9722-51779966c49b-kube-api-access-j2frr\") pod \"3c9c707f-f88b-4ba9-9722-51779966c49b\" (UID: \"3c9c707f-f88b-4ba9-9722-51779966c49b\") " Jan 07 09:52:30 crc kubenswrapper[5131]: I0107 09:52:30.390993 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c9c707f-f88b-4ba9-9722-51779966c49b-utilities" (OuterVolumeSpecName: "utilities") pod "3c9c707f-f88b-4ba9-9722-51779966c49b" 
(UID: "3c9c707f-f88b-4ba9-9722-51779966c49b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 09:52:30 crc kubenswrapper[5131]: I0107 09:52:30.396055 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c9c707f-f88b-4ba9-9722-51779966c49b-kube-api-access-j2frr" (OuterVolumeSpecName: "kube-api-access-j2frr") pod "3c9c707f-f88b-4ba9-9722-51779966c49b" (UID: "3c9c707f-f88b-4ba9-9722-51779966c49b"). InnerVolumeSpecName "kube-api-access-j2frr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:52:30 crc kubenswrapper[5131]: I0107 09:52:30.418764 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c9c707f-f88b-4ba9-9722-51779966c49b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3c9c707f-f88b-4ba9-9722-51779966c49b" (UID: "3c9c707f-f88b-4ba9-9722-51779966c49b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 09:52:30 crc kubenswrapper[5131]: I0107 09:52:30.489722 5131 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c9c707f-f88b-4ba9-9722-51779966c49b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 07 09:52:30 crc kubenswrapper[5131]: I0107 09:52:30.489759 5131 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c9c707f-f88b-4ba9-9722-51779966c49b-utilities\") on node \"crc\" DevicePath \"\"" Jan 07 09:52:30 crc kubenswrapper[5131]: I0107 09:52:30.489770 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-j2frr\" (UniqueName: \"kubernetes.io/projected/3c9c707f-f88b-4ba9-9722-51779966c49b-kube-api-access-j2frr\") on node \"crc\" DevicePath \"\"" Jan 07 09:52:30 crc kubenswrapper[5131]: I0107 09:52:30.539261 5131 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rq7z2" Jan 07 09:52:30 crc kubenswrapper[5131]: I0107 09:52:30.590672 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b21d118-e577-4bf0-a27c-f8fe3f05adc6-catalog-content\") pod \"5b21d118-e577-4bf0-a27c-f8fe3f05adc6\" (UID: \"5b21d118-e577-4bf0-a27c-f8fe3f05adc6\") " Jan 07 09:52:30 crc kubenswrapper[5131]: I0107 09:52:30.590721 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b21d118-e577-4bf0-a27c-f8fe3f05adc6-utilities\") pod \"5b21d118-e577-4bf0-a27c-f8fe3f05adc6\" (UID: \"5b21d118-e577-4bf0-a27c-f8fe3f05adc6\") " Jan 07 09:52:30 crc kubenswrapper[5131]: I0107 09:52:30.590886 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vp8lr\" (UniqueName: \"kubernetes.io/projected/5b21d118-e577-4bf0-a27c-f8fe3f05adc6-kube-api-access-vp8lr\") pod \"5b21d118-e577-4bf0-a27c-f8fe3f05adc6\" (UID: \"5b21d118-e577-4bf0-a27c-f8fe3f05adc6\") " Jan 07 09:52:30 crc kubenswrapper[5131]: I0107 09:52:30.592647 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b21d118-e577-4bf0-a27c-f8fe3f05adc6-utilities" (OuterVolumeSpecName: "utilities") pod "5b21d118-e577-4bf0-a27c-f8fe3f05adc6" (UID: "5b21d118-e577-4bf0-a27c-f8fe3f05adc6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 09:52:30 crc kubenswrapper[5131]: I0107 09:52:30.594264 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b21d118-e577-4bf0-a27c-f8fe3f05adc6-kube-api-access-vp8lr" (OuterVolumeSpecName: "kube-api-access-vp8lr") pod "5b21d118-e577-4bf0-a27c-f8fe3f05adc6" (UID: "5b21d118-e577-4bf0-a27c-f8fe3f05adc6"). InnerVolumeSpecName "kube-api-access-vp8lr". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:52:30 crc kubenswrapper[5131]: I0107 09:52:30.602995 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b21d118-e577-4bf0-a27c-f8fe3f05adc6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5b21d118-e577-4bf0-a27c-f8fe3f05adc6" (UID: "5b21d118-e577-4bf0-a27c-f8fe3f05adc6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 09:52:30 crc kubenswrapper[5131]: I0107 09:52:30.692159 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vp8lr\" (UniqueName: \"kubernetes.io/projected/5b21d118-e577-4bf0-a27c-f8fe3f05adc6-kube-api-access-vp8lr\") on node \"crc\" DevicePath \"\"" Jan 07 09:52:30 crc kubenswrapper[5131]: I0107 09:52:30.692205 5131 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b21d118-e577-4bf0-a27c-f8fe3f05adc6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 07 09:52:30 crc kubenswrapper[5131]: I0107 09:52:30.692218 5131 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b21d118-e577-4bf0-a27c-f8fe3f05adc6-utilities\") on node \"crc\" DevicePath \"\"" Jan 07 09:52:30 crc kubenswrapper[5131]: I0107 09:52:30.950139 5131 generic.go:358] "Generic (PLEG): container finished" podID="3c9c707f-f88b-4ba9-9722-51779966c49b" containerID="46235cc6f1b39e190e46fa949c73d83c0d49d0d2ce065446114205d70d304dfd" exitCode=0 Jan 07 09:52:30 crc kubenswrapper[5131]: I0107 09:52:30.950174 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5fgb6" event={"ID":"3c9c707f-f88b-4ba9-9722-51779966c49b","Type":"ContainerDied","Data":"46235cc6f1b39e190e46fa949c73d83c0d49d0d2ce065446114205d70d304dfd"} Jan 07 09:52:30 crc kubenswrapper[5131]: I0107 09:52:30.950228 5131 kubelet.go:2569] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-5fgb6" event={"ID":"3c9c707f-f88b-4ba9-9722-51779966c49b","Type":"ContainerDied","Data":"406ab0cd1edbec233068e5e73ea50c9ff28765a5a0e3f5b694c2d394df3a2a8d"} Jan 07 09:52:30 crc kubenswrapper[5131]: I0107 09:52:30.950251 5131 scope.go:117] "RemoveContainer" containerID="46235cc6f1b39e190e46fa949c73d83c0d49d0d2ce065446114205d70d304dfd" Jan 07 09:52:30 crc kubenswrapper[5131]: I0107 09:52:30.950375 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5fgb6" Jan 07 09:52:30 crc kubenswrapper[5131]: I0107 09:52:30.954592 5131 generic.go:358] "Generic (PLEG): container finished" podID="5b21d118-e577-4bf0-a27c-f8fe3f05adc6" containerID="422b6128321ad6a7f1c72401270889406323bd509c077e818856bb05c65f76a0" exitCode=0 Jan 07 09:52:30 crc kubenswrapper[5131]: I0107 09:52:30.954710 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rq7z2" Jan 07 09:52:30 crc kubenswrapper[5131]: I0107 09:52:30.954790 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rq7z2" event={"ID":"5b21d118-e577-4bf0-a27c-f8fe3f05adc6","Type":"ContainerDied","Data":"422b6128321ad6a7f1c72401270889406323bd509c077e818856bb05c65f76a0"} Jan 07 09:52:30 crc kubenswrapper[5131]: I0107 09:52:30.954874 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rq7z2" event={"ID":"5b21d118-e577-4bf0-a27c-f8fe3f05adc6","Type":"ContainerDied","Data":"41fff9a89a231ed4eed25db1f13ec50779c0c1d0fb98945ab4d7601b67b492b0"} Jan 07 09:52:30 crc kubenswrapper[5131]: I0107 09:52:30.966762 5131 scope.go:117] "RemoveContainer" containerID="befb6720e3c65a4f1c7116d3681b76ab982c5383309f38b95431962d253a4d7d" Jan 07 09:52:30 crc kubenswrapper[5131]: I0107 09:52:30.985234 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-rq7z2"] Jan 07 09:52:30 crc kubenswrapper[5131]: I0107 09:52:30.985290 5131 scope.go:117] "RemoveContainer" containerID="8d8743f8e106258443b19b6b54860ddc9161e16ccad91cd2b4093fca46185564" Jan 07 09:52:30 crc kubenswrapper[5131]: I0107 09:52:30.999028 5131 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rq7z2"] Jan 07 09:52:31 crc kubenswrapper[5131]: I0107 09:52:31.005177 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5fgb6"] Jan 07 09:52:31 crc kubenswrapper[5131]: I0107 09:52:31.010872 5131 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5fgb6"] Jan 07 09:52:31 crc kubenswrapper[5131]: I0107 09:52:31.011892 5131 scope.go:117] "RemoveContainer" containerID="46235cc6f1b39e190e46fa949c73d83c0d49d0d2ce065446114205d70d304dfd" Jan 07 09:52:31 crc kubenswrapper[5131]: E0107 09:52:31.013039 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46235cc6f1b39e190e46fa949c73d83c0d49d0d2ce065446114205d70d304dfd\": container with ID starting with 46235cc6f1b39e190e46fa949c73d83c0d49d0d2ce065446114205d70d304dfd not found: ID does not exist" containerID="46235cc6f1b39e190e46fa949c73d83c0d49d0d2ce065446114205d70d304dfd" Jan 07 09:52:31 crc kubenswrapper[5131]: I0107 09:52:31.013243 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46235cc6f1b39e190e46fa949c73d83c0d49d0d2ce065446114205d70d304dfd"} err="failed to get container status \"46235cc6f1b39e190e46fa949c73d83c0d49d0d2ce065446114205d70d304dfd\": rpc error: code = NotFound desc = could not find container \"46235cc6f1b39e190e46fa949c73d83c0d49d0d2ce065446114205d70d304dfd\": container with ID starting with 46235cc6f1b39e190e46fa949c73d83c0d49d0d2ce065446114205d70d304dfd not found: ID does not exist" Jan 07 
09:52:31 crc kubenswrapper[5131]: I0107 09:52:31.013409 5131 scope.go:117] "RemoveContainer" containerID="befb6720e3c65a4f1c7116d3681b76ab982c5383309f38b95431962d253a4d7d" Jan 07 09:52:31 crc kubenswrapper[5131]: E0107 09:52:31.014238 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"befb6720e3c65a4f1c7116d3681b76ab982c5383309f38b95431962d253a4d7d\": container with ID starting with befb6720e3c65a4f1c7116d3681b76ab982c5383309f38b95431962d253a4d7d not found: ID does not exist" containerID="befb6720e3c65a4f1c7116d3681b76ab982c5383309f38b95431962d253a4d7d" Jan 07 09:52:31 crc kubenswrapper[5131]: I0107 09:52:31.014293 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"befb6720e3c65a4f1c7116d3681b76ab982c5383309f38b95431962d253a4d7d"} err="failed to get container status \"befb6720e3c65a4f1c7116d3681b76ab982c5383309f38b95431962d253a4d7d\": rpc error: code = NotFound desc = could not find container \"befb6720e3c65a4f1c7116d3681b76ab982c5383309f38b95431962d253a4d7d\": container with ID starting with befb6720e3c65a4f1c7116d3681b76ab982c5383309f38b95431962d253a4d7d not found: ID does not exist" Jan 07 09:52:31 crc kubenswrapper[5131]: I0107 09:52:31.014311 5131 scope.go:117] "RemoveContainer" containerID="8d8743f8e106258443b19b6b54860ddc9161e16ccad91cd2b4093fca46185564" Jan 07 09:52:31 crc kubenswrapper[5131]: E0107 09:52:31.014891 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d8743f8e106258443b19b6b54860ddc9161e16ccad91cd2b4093fca46185564\": container with ID starting with 8d8743f8e106258443b19b6b54860ddc9161e16ccad91cd2b4093fca46185564 not found: ID does not exist" containerID="8d8743f8e106258443b19b6b54860ddc9161e16ccad91cd2b4093fca46185564" Jan 07 09:52:31 crc kubenswrapper[5131]: I0107 09:52:31.014958 5131 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"8d8743f8e106258443b19b6b54860ddc9161e16ccad91cd2b4093fca46185564"} err="failed to get container status \"8d8743f8e106258443b19b6b54860ddc9161e16ccad91cd2b4093fca46185564\": rpc error: code = NotFound desc = could not find container \"8d8743f8e106258443b19b6b54860ddc9161e16ccad91cd2b4093fca46185564\": container with ID starting with 8d8743f8e106258443b19b6b54860ddc9161e16ccad91cd2b4093fca46185564 not found: ID does not exist" Jan 07 09:52:31 crc kubenswrapper[5131]: I0107 09:52:31.015041 5131 scope.go:117] "RemoveContainer" containerID="422b6128321ad6a7f1c72401270889406323bd509c077e818856bb05c65f76a0" Jan 07 09:52:31 crc kubenswrapper[5131]: I0107 09:52:31.027567 5131 scope.go:117] "RemoveContainer" containerID="3224f9c43a644db4d26fb89a60eedd9fd072777f6834cb82d2a2491dc53d1d95" Jan 07 09:52:31 crc kubenswrapper[5131]: I0107 09:52:31.044077 5131 scope.go:117] "RemoveContainer" containerID="59f2ad1057e3811da8fb0584fcd11b4475a96075b067fa96a84a3b3e092f47ca" Jan 07 09:52:31 crc kubenswrapper[5131]: I0107 09:52:31.056678 5131 scope.go:117] "RemoveContainer" containerID="422b6128321ad6a7f1c72401270889406323bd509c077e818856bb05c65f76a0" Jan 07 09:52:31 crc kubenswrapper[5131]: E0107 09:52:31.057011 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"422b6128321ad6a7f1c72401270889406323bd509c077e818856bb05c65f76a0\": container with ID starting with 422b6128321ad6a7f1c72401270889406323bd509c077e818856bb05c65f76a0 not found: ID does not exist" containerID="422b6128321ad6a7f1c72401270889406323bd509c077e818856bb05c65f76a0" Jan 07 09:52:31 crc kubenswrapper[5131]: I0107 09:52:31.057043 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"422b6128321ad6a7f1c72401270889406323bd509c077e818856bb05c65f76a0"} err="failed to get container status \"422b6128321ad6a7f1c72401270889406323bd509c077e818856bb05c65f76a0\": rpc error: 
code = NotFound desc = could not find container \"422b6128321ad6a7f1c72401270889406323bd509c077e818856bb05c65f76a0\": container with ID starting with 422b6128321ad6a7f1c72401270889406323bd509c077e818856bb05c65f76a0 not found: ID does not exist" Jan 07 09:52:31 crc kubenswrapper[5131]: I0107 09:52:31.057063 5131 scope.go:117] "RemoveContainer" containerID="3224f9c43a644db4d26fb89a60eedd9fd072777f6834cb82d2a2491dc53d1d95" Jan 07 09:52:31 crc kubenswrapper[5131]: E0107 09:52:31.057579 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3224f9c43a644db4d26fb89a60eedd9fd072777f6834cb82d2a2491dc53d1d95\": container with ID starting with 3224f9c43a644db4d26fb89a60eedd9fd072777f6834cb82d2a2491dc53d1d95 not found: ID does not exist" containerID="3224f9c43a644db4d26fb89a60eedd9fd072777f6834cb82d2a2491dc53d1d95" Jan 07 09:52:31 crc kubenswrapper[5131]: I0107 09:52:31.057715 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3224f9c43a644db4d26fb89a60eedd9fd072777f6834cb82d2a2491dc53d1d95"} err="failed to get container status \"3224f9c43a644db4d26fb89a60eedd9fd072777f6834cb82d2a2491dc53d1d95\": rpc error: code = NotFound desc = could not find container \"3224f9c43a644db4d26fb89a60eedd9fd072777f6834cb82d2a2491dc53d1d95\": container with ID starting with 3224f9c43a644db4d26fb89a60eedd9fd072777f6834cb82d2a2491dc53d1d95 not found: ID does not exist" Jan 07 09:52:31 crc kubenswrapper[5131]: I0107 09:52:31.057871 5131 scope.go:117] "RemoveContainer" containerID="59f2ad1057e3811da8fb0584fcd11b4475a96075b067fa96a84a3b3e092f47ca" Jan 07 09:52:31 crc kubenswrapper[5131]: E0107 09:52:31.058241 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59f2ad1057e3811da8fb0584fcd11b4475a96075b067fa96a84a3b3e092f47ca\": container with ID starting with 
59f2ad1057e3811da8fb0584fcd11b4475a96075b067fa96a84a3b3e092f47ca not found: ID does not exist" containerID="59f2ad1057e3811da8fb0584fcd11b4475a96075b067fa96a84a3b3e092f47ca" Jan 07 09:52:31 crc kubenswrapper[5131]: I0107 09:52:31.058275 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59f2ad1057e3811da8fb0584fcd11b4475a96075b067fa96a84a3b3e092f47ca"} err="failed to get container status \"59f2ad1057e3811da8fb0584fcd11b4475a96075b067fa96a84a3b3e092f47ca\": rpc error: code = NotFound desc = could not find container \"59f2ad1057e3811da8fb0584fcd11b4475a96075b067fa96a84a3b3e092f47ca\": container with ID starting with 59f2ad1057e3811da8fb0584fcd11b4475a96075b067fa96a84a3b3e092f47ca not found: ID does not exist" Jan 07 09:52:31 crc kubenswrapper[5131]: I0107 09:52:31.796959 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Jan 07 09:52:31 crc kubenswrapper[5131]: I0107 09:52:31.798797 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6f5047a5-cbaa-4193-a89d-901db9b002d8" containerName="collect-profiles" Jan 07 09:52:31 crc kubenswrapper[5131]: I0107 09:52:31.798887 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f5047a5-cbaa-4193-a89d-901db9b002d8" containerName="collect-profiles" Jan 07 09:52:31 crc kubenswrapper[5131]: I0107 09:52:31.798920 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3c9c707f-f88b-4ba9-9722-51779966c49b" containerName="extract-utilities" Jan 07 09:52:31 crc kubenswrapper[5131]: I0107 09:52:31.798935 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c9c707f-f88b-4ba9-9722-51779966c49b" containerName="extract-utilities" Jan 07 09:52:31 crc kubenswrapper[5131]: I0107 09:52:31.798963 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5b21d118-e577-4bf0-a27c-f8fe3f05adc6" containerName="extract-content" Jan 07 09:52:31 
crc kubenswrapper[5131]: I0107 09:52:31.798976 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b21d118-e577-4bf0-a27c-f8fe3f05adc6" containerName="extract-content" Jan 07 09:52:31 crc kubenswrapper[5131]: I0107 09:52:31.799028 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6f16686c-cacb-409a-a551-b29b54a60782" containerName="pruner" Jan 07 09:52:31 crc kubenswrapper[5131]: I0107 09:52:31.799044 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f16686c-cacb-409a-a551-b29b54a60782" containerName="pruner" Jan 07 09:52:31 crc kubenswrapper[5131]: I0107 09:52:31.799061 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5b21d118-e577-4bf0-a27c-f8fe3f05adc6" containerName="registry-server" Jan 07 09:52:31 crc kubenswrapper[5131]: I0107 09:52:31.799076 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b21d118-e577-4bf0-a27c-f8fe3f05adc6" containerName="registry-server" Jan 07 09:52:31 crc kubenswrapper[5131]: I0107 09:52:31.799102 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c61a2db1-fb94-4541-bc6a-57a2f0075072" containerName="kube-multus-additional-cni-plugins" Jan 07 09:52:31 crc kubenswrapper[5131]: I0107 09:52:31.799119 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="c61a2db1-fb94-4541-bc6a-57a2f0075072" containerName="kube-multus-additional-cni-plugins" Jan 07 09:52:31 crc kubenswrapper[5131]: I0107 09:52:31.799151 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="04c8b896-cb51-42c7-a684-3145e157ebec" containerName="pruner" Jan 07 09:52:31 crc kubenswrapper[5131]: I0107 09:52:31.799165 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="04c8b896-cb51-42c7-a684-3145e157ebec" containerName="pruner" Jan 07 09:52:31 crc kubenswrapper[5131]: I0107 09:52:31.799188 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="5b21d118-e577-4bf0-a27c-f8fe3f05adc6" containerName="extract-utilities" Jan 07 09:52:31 crc kubenswrapper[5131]: I0107 09:52:31.799205 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b21d118-e577-4bf0-a27c-f8fe3f05adc6" containerName="extract-utilities" Jan 07 09:52:31 crc kubenswrapper[5131]: I0107 09:52:31.799224 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3c9c707f-f88b-4ba9-9722-51779966c49b" containerName="extract-content" Jan 07 09:52:31 crc kubenswrapper[5131]: I0107 09:52:31.799239 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c9c707f-f88b-4ba9-9722-51779966c49b" containerName="extract-content" Jan 07 09:52:31 crc kubenswrapper[5131]: I0107 09:52:31.799262 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3c9c707f-f88b-4ba9-9722-51779966c49b" containerName="registry-server" Jan 07 09:52:31 crc kubenswrapper[5131]: I0107 09:52:31.799276 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c9c707f-f88b-4ba9-9722-51779966c49b" containerName="registry-server" Jan 07 09:52:31 crc kubenswrapper[5131]: I0107 09:52:31.799488 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="c61a2db1-fb94-4541-bc6a-57a2f0075072" containerName="kube-multus-additional-cni-plugins" Jan 07 09:52:31 crc kubenswrapper[5131]: I0107 09:52:31.799517 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="04c8b896-cb51-42c7-a684-3145e157ebec" containerName="pruner" Jan 07 09:52:31 crc kubenswrapper[5131]: I0107 09:52:31.799539 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="6f5047a5-cbaa-4193-a89d-901db9b002d8" containerName="collect-profiles" Jan 07 09:52:31 crc kubenswrapper[5131]: I0107 09:52:31.799563 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="6f16686c-cacb-409a-a551-b29b54a60782" containerName="pruner" Jan 07 09:52:31 crc kubenswrapper[5131]: I0107 09:52:31.799588 5131 
memory_manager.go:356] "RemoveStaleState removing state" podUID="5b21d118-e577-4bf0-a27c-f8fe3f05adc6" containerName="registry-server" Jan 07 09:52:31 crc kubenswrapper[5131]: I0107 09:52:31.799608 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="3c9c707f-f88b-4ba9-9722-51779966c49b" containerName="registry-server" Jan 07 09:52:31 crc kubenswrapper[5131]: I0107 09:52:31.804712 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 07 09:52:31 crc kubenswrapper[5131]: I0107 09:52:31.806636 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Jan 07 09:52:31 crc kubenswrapper[5131]: I0107 09:52:31.812244 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Jan 07 09:52:31 crc kubenswrapper[5131]: I0107 09:52:31.812358 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Jan 07 09:52:31 crc kubenswrapper[5131]: I0107 09:52:31.911007 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5ba9735b-5cfb-4fbe-89dc-ff93a61da881-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"5ba9735b-5cfb-4fbe-89dc-ff93a61da881\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 07 09:52:31 crc kubenswrapper[5131]: I0107 09:52:31.911067 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5ba9735b-5cfb-4fbe-89dc-ff93a61da881-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"5ba9735b-5cfb-4fbe-89dc-ff93a61da881\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 07 09:52:32 crc kubenswrapper[5131]: I0107 09:52:32.012471 5131 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5ba9735b-5cfb-4fbe-89dc-ff93a61da881-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"5ba9735b-5cfb-4fbe-89dc-ff93a61da881\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 07 09:52:32 crc kubenswrapper[5131]: I0107 09:52:32.012553 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5ba9735b-5cfb-4fbe-89dc-ff93a61da881-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"5ba9735b-5cfb-4fbe-89dc-ff93a61da881\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 07 09:52:32 crc kubenswrapper[5131]: I0107 09:52:32.012693 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5ba9735b-5cfb-4fbe-89dc-ff93a61da881-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"5ba9735b-5cfb-4fbe-89dc-ff93a61da881\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 07 09:52:32 crc kubenswrapper[5131]: I0107 09:52:32.048791 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5ba9735b-5cfb-4fbe-89dc-ff93a61da881-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"5ba9735b-5cfb-4fbe-89dc-ff93a61da881\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 07 09:52:32 crc kubenswrapper[5131]: I0107 09:52:32.137849 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Jan 07 09:52:32 crc kubenswrapper[5131]: I0107 09:52:32.146075 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 07 09:52:32 crc kubenswrapper[5131]: I0107 09:52:32.192450 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c9c707f-f88b-4ba9-9722-51779966c49b" path="/var/lib/kubelet/pods/3c9c707f-f88b-4ba9-9722-51779966c49b/volumes" Jan 07 09:52:32 crc kubenswrapper[5131]: I0107 09:52:32.194174 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b21d118-e577-4bf0-a27c-f8fe3f05adc6" path="/var/lib/kubelet/pods/5b21d118-e577-4bf0-a27c-f8fe3f05adc6/volumes" Jan 07 09:52:32 crc kubenswrapper[5131]: I0107 09:52:32.402755 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rv52j"] Jan 07 09:52:32 crc kubenswrapper[5131]: I0107 09:52:32.403273 5131 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rv52j" podUID="17b0639f-c0b9-4140-af54-4da733719edb" containerName="registry-server" containerID="cri-o://33e781867b9cf24ea5bfb7828025c249cf82ddf2aefe458372eb3b6571d38425" gracePeriod=2 Jan 07 09:52:32 crc kubenswrapper[5131]: I0107 09:52:32.429072 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Jan 07 09:52:32 crc kubenswrapper[5131]: I0107 09:52:32.606234 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6gww5"] Jan 07 09:52:32 crc kubenswrapper[5131]: I0107 09:52:32.606761 5131 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6gww5" podUID="ef7d2b17-658d-4af6-b15d-5bdadcc4f021" containerName="registry-server" containerID="cri-o://2f9abb0d24c5647851c5e089a4878080d0c36c35271eaa995b3947fcec80867f" gracePeriod=2 Jan 07 09:52:32 crc kubenswrapper[5131]: I0107 09:52:32.750573 5131 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rv52j" Jan 07 09:52:32 crc kubenswrapper[5131]: I0107 09:52:32.824749 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhwk8\" (UniqueName: \"kubernetes.io/projected/17b0639f-c0b9-4140-af54-4da733719edb-kube-api-access-rhwk8\") pod \"17b0639f-c0b9-4140-af54-4da733719edb\" (UID: \"17b0639f-c0b9-4140-af54-4da733719edb\") " Jan 07 09:52:32 crc kubenswrapper[5131]: I0107 09:52:32.824810 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17b0639f-c0b9-4140-af54-4da733719edb-utilities\") pod \"17b0639f-c0b9-4140-af54-4da733719edb\" (UID: \"17b0639f-c0b9-4140-af54-4da733719edb\") " Jan 07 09:52:32 crc kubenswrapper[5131]: I0107 09:52:32.824984 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17b0639f-c0b9-4140-af54-4da733719edb-catalog-content\") pod \"17b0639f-c0b9-4140-af54-4da733719edb\" (UID: \"17b0639f-c0b9-4140-af54-4da733719edb\") " Jan 07 09:52:32 crc kubenswrapper[5131]: I0107 09:52:32.826514 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17b0639f-c0b9-4140-af54-4da733719edb-utilities" (OuterVolumeSpecName: "utilities") pod "17b0639f-c0b9-4140-af54-4da733719edb" (UID: "17b0639f-c0b9-4140-af54-4da733719edb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 09:52:32 crc kubenswrapper[5131]: I0107 09:52:32.831477 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17b0639f-c0b9-4140-af54-4da733719edb-kube-api-access-rhwk8" (OuterVolumeSpecName: "kube-api-access-rhwk8") pod "17b0639f-c0b9-4140-af54-4da733719edb" (UID: "17b0639f-c0b9-4140-af54-4da733719edb"). InnerVolumeSpecName "kube-api-access-rhwk8". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:52:32 crc kubenswrapper[5131]: I0107 09:52:32.881092 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17b0639f-c0b9-4140-af54-4da733719edb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "17b0639f-c0b9-4140-af54-4da733719edb" (UID: "17b0639f-c0b9-4140-af54-4da733719edb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 09:52:32 crc kubenswrapper[5131]: I0107 09:52:32.926436 5131 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17b0639f-c0b9-4140-af54-4da733719edb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 07 09:52:32 crc kubenswrapper[5131]: I0107 09:52:32.926470 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rhwk8\" (UniqueName: \"kubernetes.io/projected/17b0639f-c0b9-4140-af54-4da733719edb-kube-api-access-rhwk8\") on node \"crc\" DevicePath \"\"" Jan 07 09:52:32 crc kubenswrapper[5131]: I0107 09:52:32.926485 5131 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17b0639f-c0b9-4140-af54-4da733719edb-utilities\") on node \"crc\" DevicePath \"\"" Jan 07 09:52:32 crc kubenswrapper[5131]: I0107 09:52:32.929186 5131 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6gww5" Jan 07 09:52:32 crc kubenswrapper[5131]: I0107 09:52:32.974264 5131 generic.go:358] "Generic (PLEG): container finished" podID="ef7d2b17-658d-4af6-b15d-5bdadcc4f021" containerID="2f9abb0d24c5647851c5e089a4878080d0c36c35271eaa995b3947fcec80867f" exitCode=0 Jan 07 09:52:32 crc kubenswrapper[5131]: I0107 09:52:32.974404 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6gww5" event={"ID":"ef7d2b17-658d-4af6-b15d-5bdadcc4f021","Type":"ContainerDied","Data":"2f9abb0d24c5647851c5e089a4878080d0c36c35271eaa995b3947fcec80867f"} Jan 07 09:52:32 crc kubenswrapper[5131]: I0107 09:52:32.974445 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6gww5" event={"ID":"ef7d2b17-658d-4af6-b15d-5bdadcc4f021","Type":"ContainerDied","Data":"ba3cedaea69b56b88b2cbf2af9fb9623150186efe957bac1943c967468850e47"} Jan 07 09:52:32 crc kubenswrapper[5131]: I0107 09:52:32.974465 5131 scope.go:117] "RemoveContainer" containerID="2f9abb0d24c5647851c5e089a4878080d0c36c35271eaa995b3947fcec80867f" Jan 07 09:52:32 crc kubenswrapper[5131]: I0107 09:52:32.974803 5131 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6gww5" Jan 07 09:52:32 crc kubenswrapper[5131]: I0107 09:52:32.976588 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"5ba9735b-5cfb-4fbe-89dc-ff93a61da881","Type":"ContainerStarted","Data":"d15e4503ed01a822bfabaada96f5ea7510dc88d73ffee9ee9d2b12f966a3682a"} Jan 07 09:52:32 crc kubenswrapper[5131]: I0107 09:52:32.976631 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"5ba9735b-5cfb-4fbe-89dc-ff93a61da881","Type":"ContainerStarted","Data":"752ec58dbead39409a66c7aa92ca8d31007952c839e6b18e97ec963d1eabac36"} Jan 07 09:52:32 crc kubenswrapper[5131]: I0107 09:52:32.980790 5131 generic.go:358] "Generic (PLEG): container finished" podID="17b0639f-c0b9-4140-af54-4da733719edb" containerID="33e781867b9cf24ea5bfb7828025c249cf82ddf2aefe458372eb3b6571d38425" exitCode=0 Jan 07 09:52:32 crc kubenswrapper[5131]: I0107 09:52:32.980867 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rv52j" event={"ID":"17b0639f-c0b9-4140-af54-4da733719edb","Type":"ContainerDied","Data":"33e781867b9cf24ea5bfb7828025c249cf82ddf2aefe458372eb3b6571d38425"} Jan 07 09:52:32 crc kubenswrapper[5131]: I0107 09:52:32.980882 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rv52j" event={"ID":"17b0639f-c0b9-4140-af54-4da733719edb","Type":"ContainerDied","Data":"9aefa76d24d07e32678ae078b945929126e192e4706aa2efa662ce57574dfa90"} Jan 07 09:52:32 crc kubenswrapper[5131]: I0107 09:52:32.980932 5131 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rv52j" Jan 07 09:52:32 crc kubenswrapper[5131]: I0107 09:52:32.993603 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-12-crc" podStartSLOduration=1.993592788 podStartE2EDuration="1.993592788s" podCreationTimestamp="2026-01-07 09:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:52:32.991134135 +0000 UTC m=+181.157435699" watchObservedRunningTime="2026-01-07 09:52:32.993592788 +0000 UTC m=+181.159894352" Jan 07 09:52:33 crc kubenswrapper[5131]: I0107 09:52:33.000131 5131 scope.go:117] "RemoveContainer" containerID="857b11b53769150553b35062ddbeb2c9ecd14f5b756053194cc21606f43b8d35" Jan 07 09:52:33 crc kubenswrapper[5131]: I0107 09:52:33.012795 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rv52j"] Jan 07 09:52:33 crc kubenswrapper[5131]: I0107 09:52:33.016969 5131 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rv52j"] Jan 07 09:52:33 crc kubenswrapper[5131]: I0107 09:52:33.027743 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef7d2b17-658d-4af6-b15d-5bdadcc4f021-catalog-content\") pod \"ef7d2b17-658d-4af6-b15d-5bdadcc4f021\" (UID: \"ef7d2b17-658d-4af6-b15d-5bdadcc4f021\") " Jan 07 09:52:33 crc kubenswrapper[5131]: I0107 09:52:33.027821 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x848k\" (UniqueName: \"kubernetes.io/projected/ef7d2b17-658d-4af6-b15d-5bdadcc4f021-kube-api-access-x848k\") pod \"ef7d2b17-658d-4af6-b15d-5bdadcc4f021\" (UID: \"ef7d2b17-658d-4af6-b15d-5bdadcc4f021\") " Jan 07 09:52:33 crc kubenswrapper[5131]: I0107 09:52:33.027965 5131 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef7d2b17-658d-4af6-b15d-5bdadcc4f021-utilities\") pod \"ef7d2b17-658d-4af6-b15d-5bdadcc4f021\" (UID: \"ef7d2b17-658d-4af6-b15d-5bdadcc4f021\") " Jan 07 09:52:33 crc kubenswrapper[5131]: I0107 09:52:33.036698 5131 scope.go:117] "RemoveContainer" containerID="7668655bf8771597d010c576ae3c1d166afe37fbf39f59d506e0840b2a8a9fbd" Jan 07 09:52:33 crc kubenswrapper[5131]: I0107 09:52:33.036852 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef7d2b17-658d-4af6-b15d-5bdadcc4f021-utilities" (OuterVolumeSpecName: "utilities") pod "ef7d2b17-658d-4af6-b15d-5bdadcc4f021" (UID: "ef7d2b17-658d-4af6-b15d-5bdadcc4f021"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 09:52:33 crc kubenswrapper[5131]: I0107 09:52:33.041986 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef7d2b17-658d-4af6-b15d-5bdadcc4f021-kube-api-access-x848k" (OuterVolumeSpecName: "kube-api-access-x848k") pod "ef7d2b17-658d-4af6-b15d-5bdadcc4f021" (UID: "ef7d2b17-658d-4af6-b15d-5bdadcc4f021"). InnerVolumeSpecName "kube-api-access-x848k". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:52:33 crc kubenswrapper[5131]: I0107 09:52:33.051100 5131 scope.go:117] "RemoveContainer" containerID="2f9abb0d24c5647851c5e089a4878080d0c36c35271eaa995b3947fcec80867f" Jan 07 09:52:33 crc kubenswrapper[5131]: E0107 09:52:33.051544 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f9abb0d24c5647851c5e089a4878080d0c36c35271eaa995b3947fcec80867f\": container with ID starting with 2f9abb0d24c5647851c5e089a4878080d0c36c35271eaa995b3947fcec80867f not found: ID does not exist" containerID="2f9abb0d24c5647851c5e089a4878080d0c36c35271eaa995b3947fcec80867f" Jan 07 09:52:33 crc kubenswrapper[5131]: I0107 09:52:33.051590 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f9abb0d24c5647851c5e089a4878080d0c36c35271eaa995b3947fcec80867f"} err="failed to get container status \"2f9abb0d24c5647851c5e089a4878080d0c36c35271eaa995b3947fcec80867f\": rpc error: code = NotFound desc = could not find container \"2f9abb0d24c5647851c5e089a4878080d0c36c35271eaa995b3947fcec80867f\": container with ID starting with 2f9abb0d24c5647851c5e089a4878080d0c36c35271eaa995b3947fcec80867f not found: ID does not exist" Jan 07 09:52:33 crc kubenswrapper[5131]: I0107 09:52:33.051614 5131 scope.go:117] "RemoveContainer" containerID="857b11b53769150553b35062ddbeb2c9ecd14f5b756053194cc21606f43b8d35" Jan 07 09:52:33 crc kubenswrapper[5131]: E0107 09:52:33.051896 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"857b11b53769150553b35062ddbeb2c9ecd14f5b756053194cc21606f43b8d35\": container with ID starting with 857b11b53769150553b35062ddbeb2c9ecd14f5b756053194cc21606f43b8d35 not found: ID does not exist" containerID="857b11b53769150553b35062ddbeb2c9ecd14f5b756053194cc21606f43b8d35" Jan 07 09:52:33 crc kubenswrapper[5131]: I0107 09:52:33.051927 
5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"857b11b53769150553b35062ddbeb2c9ecd14f5b756053194cc21606f43b8d35"} err="failed to get container status \"857b11b53769150553b35062ddbeb2c9ecd14f5b756053194cc21606f43b8d35\": rpc error: code = NotFound desc = could not find container \"857b11b53769150553b35062ddbeb2c9ecd14f5b756053194cc21606f43b8d35\": container with ID starting with 857b11b53769150553b35062ddbeb2c9ecd14f5b756053194cc21606f43b8d35 not found: ID does not exist" Jan 07 09:52:33 crc kubenswrapper[5131]: I0107 09:52:33.051944 5131 scope.go:117] "RemoveContainer" containerID="7668655bf8771597d010c576ae3c1d166afe37fbf39f59d506e0840b2a8a9fbd" Jan 07 09:52:33 crc kubenswrapper[5131]: E0107 09:52:33.052206 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7668655bf8771597d010c576ae3c1d166afe37fbf39f59d506e0840b2a8a9fbd\": container with ID starting with 7668655bf8771597d010c576ae3c1d166afe37fbf39f59d506e0840b2a8a9fbd not found: ID does not exist" containerID="7668655bf8771597d010c576ae3c1d166afe37fbf39f59d506e0840b2a8a9fbd" Jan 07 09:52:33 crc kubenswrapper[5131]: I0107 09:52:33.052233 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7668655bf8771597d010c576ae3c1d166afe37fbf39f59d506e0840b2a8a9fbd"} err="failed to get container status \"7668655bf8771597d010c576ae3c1d166afe37fbf39f59d506e0840b2a8a9fbd\": rpc error: code = NotFound desc = could not find container \"7668655bf8771597d010c576ae3c1d166afe37fbf39f59d506e0840b2a8a9fbd\": container with ID starting with 7668655bf8771597d010c576ae3c1d166afe37fbf39f59d506e0840b2a8a9fbd not found: ID does not exist" Jan 07 09:52:33 crc kubenswrapper[5131]: I0107 09:52:33.052257 5131 scope.go:117] "RemoveContainer" containerID="33e781867b9cf24ea5bfb7828025c249cf82ddf2aefe458372eb3b6571d38425" Jan 07 09:52:33 crc kubenswrapper[5131]: I0107 
09:52:33.066188 5131 scope.go:117] "RemoveContainer" containerID="2497c3f802b4302661be5d7104fbbdd07364f25b15b2c8f28bdd81703fef770b" Jan 07 09:52:33 crc kubenswrapper[5131]: I0107 09:52:33.079696 5131 scope.go:117] "RemoveContainer" containerID="a06c9983858c1d3af015868ff0c1db73693d4788b323d4d88f5660cb86b80176" Jan 07 09:52:33 crc kubenswrapper[5131]: I0107 09:52:33.098456 5131 scope.go:117] "RemoveContainer" containerID="33e781867b9cf24ea5bfb7828025c249cf82ddf2aefe458372eb3b6571d38425" Jan 07 09:52:33 crc kubenswrapper[5131]: E0107 09:52:33.098818 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33e781867b9cf24ea5bfb7828025c249cf82ddf2aefe458372eb3b6571d38425\": container with ID starting with 33e781867b9cf24ea5bfb7828025c249cf82ddf2aefe458372eb3b6571d38425 not found: ID does not exist" containerID="33e781867b9cf24ea5bfb7828025c249cf82ddf2aefe458372eb3b6571d38425" Jan 07 09:52:33 crc kubenswrapper[5131]: I0107 09:52:33.099386 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33e781867b9cf24ea5bfb7828025c249cf82ddf2aefe458372eb3b6571d38425"} err="failed to get container status \"33e781867b9cf24ea5bfb7828025c249cf82ddf2aefe458372eb3b6571d38425\": rpc error: code = NotFound desc = could not find container \"33e781867b9cf24ea5bfb7828025c249cf82ddf2aefe458372eb3b6571d38425\": container with ID starting with 33e781867b9cf24ea5bfb7828025c249cf82ddf2aefe458372eb3b6571d38425 not found: ID does not exist" Jan 07 09:52:33 crc kubenswrapper[5131]: I0107 09:52:33.099417 5131 scope.go:117] "RemoveContainer" containerID="2497c3f802b4302661be5d7104fbbdd07364f25b15b2c8f28bdd81703fef770b" Jan 07 09:52:33 crc kubenswrapper[5131]: E0107 09:52:33.099743 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2497c3f802b4302661be5d7104fbbdd07364f25b15b2c8f28bdd81703fef770b\": container 
with ID starting with 2497c3f802b4302661be5d7104fbbdd07364f25b15b2c8f28bdd81703fef770b not found: ID does not exist" containerID="2497c3f802b4302661be5d7104fbbdd07364f25b15b2c8f28bdd81703fef770b" Jan 07 09:52:33 crc kubenswrapper[5131]: I0107 09:52:33.099795 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2497c3f802b4302661be5d7104fbbdd07364f25b15b2c8f28bdd81703fef770b"} err="failed to get container status \"2497c3f802b4302661be5d7104fbbdd07364f25b15b2c8f28bdd81703fef770b\": rpc error: code = NotFound desc = could not find container \"2497c3f802b4302661be5d7104fbbdd07364f25b15b2c8f28bdd81703fef770b\": container with ID starting with 2497c3f802b4302661be5d7104fbbdd07364f25b15b2c8f28bdd81703fef770b not found: ID does not exist" Jan 07 09:52:33 crc kubenswrapper[5131]: I0107 09:52:33.099850 5131 scope.go:117] "RemoveContainer" containerID="a06c9983858c1d3af015868ff0c1db73693d4788b323d4d88f5660cb86b80176" Jan 07 09:52:33 crc kubenswrapper[5131]: E0107 09:52:33.100128 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a06c9983858c1d3af015868ff0c1db73693d4788b323d4d88f5660cb86b80176\": container with ID starting with a06c9983858c1d3af015868ff0c1db73693d4788b323d4d88f5660cb86b80176 not found: ID does not exist" containerID="a06c9983858c1d3af015868ff0c1db73693d4788b323d4d88f5660cb86b80176" Jan 07 09:52:33 crc kubenswrapper[5131]: I0107 09:52:33.100156 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a06c9983858c1d3af015868ff0c1db73693d4788b323d4d88f5660cb86b80176"} err="failed to get container status \"a06c9983858c1d3af015868ff0c1db73693d4788b323d4d88f5660cb86b80176\": rpc error: code = NotFound desc = could not find container \"a06c9983858c1d3af015868ff0c1db73693d4788b323d4d88f5660cb86b80176\": container with ID starting with a06c9983858c1d3af015868ff0c1db73693d4788b323d4d88f5660cb86b80176 not 
found: ID does not exist" Jan 07 09:52:33 crc kubenswrapper[5131]: I0107 09:52:33.129751 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-x848k\" (UniqueName: \"kubernetes.io/projected/ef7d2b17-658d-4af6-b15d-5bdadcc4f021-kube-api-access-x848k\") on node \"crc\" DevicePath \"\"" Jan 07 09:52:33 crc kubenswrapper[5131]: I0107 09:52:33.130143 5131 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef7d2b17-658d-4af6-b15d-5bdadcc4f021-utilities\") on node \"crc\" DevicePath \"\"" Jan 07 09:52:33 crc kubenswrapper[5131]: I0107 09:52:33.132170 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef7d2b17-658d-4af6-b15d-5bdadcc4f021-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ef7d2b17-658d-4af6-b15d-5bdadcc4f021" (UID: "ef7d2b17-658d-4af6-b15d-5bdadcc4f021"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 09:52:33 crc kubenswrapper[5131]: I0107 09:52:33.231064 5131 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef7d2b17-658d-4af6-b15d-5bdadcc4f021-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 07 09:52:33 crc kubenswrapper[5131]: I0107 09:52:33.307588 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6gww5"] Jan 07 09:52:33 crc kubenswrapper[5131]: E0107 09:52:33.313959 5131 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef7d2b17_658d_4af6_b15d_5bdadcc4f021.slice/crio-ba3cedaea69b56b88b2cbf2af9fb9623150186efe957bac1943c967468850e47\": RecentStats: unable to find data in memory cache]" Jan 07 09:52:33 crc kubenswrapper[5131]: I0107 09:52:33.315285 5131 kubelet.go:2547] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/redhat-operators-6gww5"] Jan 07 09:52:33 crc kubenswrapper[5131]: I0107 09:52:33.990169 5131 generic.go:358] "Generic (PLEG): container finished" podID="5ba9735b-5cfb-4fbe-89dc-ff93a61da881" containerID="d15e4503ed01a822bfabaada96f5ea7510dc88d73ffee9ee9d2b12f966a3682a" exitCode=0 Jan 07 09:52:33 crc kubenswrapper[5131]: I0107 09:52:33.990224 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"5ba9735b-5cfb-4fbe-89dc-ff93a61da881","Type":"ContainerDied","Data":"d15e4503ed01a822bfabaada96f5ea7510dc88d73ffee9ee9d2b12f966a3682a"} Jan 07 09:52:34 crc kubenswrapper[5131]: I0107 09:52:34.191135 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17b0639f-c0b9-4140-af54-4da733719edb" path="/var/lib/kubelet/pods/17b0639f-c0b9-4140-af54-4da733719edb/volumes" Jan 07 09:52:34 crc kubenswrapper[5131]: I0107 09:52:34.191908 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef7d2b17-658d-4af6-b15d-5bdadcc4f021" path="/var/lib/kubelet/pods/ef7d2b17-658d-4af6-b15d-5bdadcc4f021/volumes" Jan 07 09:52:35 crc kubenswrapper[5131]: I0107 09:52:35.213134 5131 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 07 09:52:35 crc kubenswrapper[5131]: I0107 09:52:35.274417 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5ba9735b-5cfb-4fbe-89dc-ff93a61da881-kubelet-dir\") pod \"5ba9735b-5cfb-4fbe-89dc-ff93a61da881\" (UID: \"5ba9735b-5cfb-4fbe-89dc-ff93a61da881\") " Jan 07 09:52:35 crc kubenswrapper[5131]: I0107 09:52:35.274524 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5ba9735b-5cfb-4fbe-89dc-ff93a61da881-kube-api-access\") pod \"5ba9735b-5cfb-4fbe-89dc-ff93a61da881\" (UID: \"5ba9735b-5cfb-4fbe-89dc-ff93a61da881\") " Jan 07 09:52:35 crc kubenswrapper[5131]: I0107 09:52:35.274552 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ba9735b-5cfb-4fbe-89dc-ff93a61da881-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5ba9735b-5cfb-4fbe-89dc-ff93a61da881" (UID: "5ba9735b-5cfb-4fbe-89dc-ff93a61da881"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 07 09:52:35 crc kubenswrapper[5131]: I0107 09:52:35.274708 5131 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5ba9735b-5cfb-4fbe-89dc-ff93a61da881-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 07 09:52:35 crc kubenswrapper[5131]: I0107 09:52:35.282367 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ba9735b-5cfb-4fbe-89dc-ff93a61da881-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5ba9735b-5cfb-4fbe-89dc-ff93a61da881" (UID: "5ba9735b-5cfb-4fbe-89dc-ff93a61da881"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:52:35 crc kubenswrapper[5131]: I0107 09:52:35.376059 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5ba9735b-5cfb-4fbe-89dc-ff93a61da881-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 07 09:52:36 crc kubenswrapper[5131]: I0107 09:52:36.007746 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"5ba9735b-5cfb-4fbe-89dc-ff93a61da881","Type":"ContainerDied","Data":"752ec58dbead39409a66c7aa92ca8d31007952c839e6b18e97ec963d1eabac36"} Jan 07 09:52:36 crc kubenswrapper[5131]: I0107 09:52:36.007788 5131 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="752ec58dbead39409a66c7aa92ca8d31007952c839e6b18e97ec963d1eabac36" Jan 07 09:52:36 crc kubenswrapper[5131]: I0107 09:52:36.007545 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 07 09:52:38 crc kubenswrapper[5131]: I0107 09:52:38.778092 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Jan 07 09:52:38 crc kubenswrapper[5131]: I0107 09:52:38.779339 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="17b0639f-c0b9-4140-af54-4da733719edb" containerName="extract-content" Jan 07 09:52:38 crc kubenswrapper[5131]: I0107 09:52:38.779356 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="17b0639f-c0b9-4140-af54-4da733719edb" containerName="extract-content" Jan 07 09:52:38 crc kubenswrapper[5131]: I0107 09:52:38.779364 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="17b0639f-c0b9-4140-af54-4da733719edb" containerName="registry-server" Jan 07 09:52:38 crc kubenswrapper[5131]: I0107 09:52:38.779373 5131 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="17b0639f-c0b9-4140-af54-4da733719edb" containerName="registry-server" Jan 07 09:52:38 crc kubenswrapper[5131]: I0107 09:52:38.779390 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ef7d2b17-658d-4af6-b15d-5bdadcc4f021" containerName="extract-content" Jan 07 09:52:38 crc kubenswrapper[5131]: I0107 09:52:38.779396 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef7d2b17-658d-4af6-b15d-5bdadcc4f021" containerName="extract-content" Jan 07 09:52:38 crc kubenswrapper[5131]: I0107 09:52:38.779406 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ef7d2b17-658d-4af6-b15d-5bdadcc4f021" containerName="registry-server" Jan 07 09:52:38 crc kubenswrapper[5131]: I0107 09:52:38.779412 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef7d2b17-658d-4af6-b15d-5bdadcc4f021" containerName="registry-server" Jan 07 09:52:38 crc kubenswrapper[5131]: I0107 09:52:38.779426 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5ba9735b-5cfb-4fbe-89dc-ff93a61da881" containerName="pruner" Jan 07 09:52:38 crc kubenswrapper[5131]: I0107 09:52:38.779432 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ba9735b-5cfb-4fbe-89dc-ff93a61da881" containerName="pruner" Jan 07 09:52:38 crc kubenswrapper[5131]: I0107 09:52:38.779449 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="17b0639f-c0b9-4140-af54-4da733719edb" containerName="extract-utilities" Jan 07 09:52:38 crc kubenswrapper[5131]: I0107 09:52:38.779457 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="17b0639f-c0b9-4140-af54-4da733719edb" containerName="extract-utilities" Jan 07 09:52:38 crc kubenswrapper[5131]: I0107 09:52:38.779486 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ef7d2b17-658d-4af6-b15d-5bdadcc4f021" containerName="extract-utilities" Jan 07 09:52:38 crc kubenswrapper[5131]: I0107 09:52:38.779494 5131 
state_mem.go:107] "Deleted CPUSet assignment" podUID="ef7d2b17-658d-4af6-b15d-5bdadcc4f021" containerName="extract-utilities" Jan 07 09:52:38 crc kubenswrapper[5131]: I0107 09:52:38.779613 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="ef7d2b17-658d-4af6-b15d-5bdadcc4f021" containerName="registry-server" Jan 07 09:52:38 crc kubenswrapper[5131]: I0107 09:52:38.779631 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="5ba9735b-5cfb-4fbe-89dc-ff93a61da881" containerName="pruner" Jan 07 09:52:38 crc kubenswrapper[5131]: I0107 09:52:38.779643 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="17b0639f-c0b9-4140-af54-4da733719edb" containerName="registry-server" Jan 07 09:52:38 crc kubenswrapper[5131]: I0107 09:52:38.861965 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Jan 07 09:52:38 crc kubenswrapper[5131]: I0107 09:52:38.862121 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 07 09:52:38 crc kubenswrapper[5131]: I0107 09:52:38.864493 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Jan 07 09:52:38 crc kubenswrapper[5131]: I0107 09:52:38.864572 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Jan 07 09:52:39 crc kubenswrapper[5131]: I0107 09:52:39.024347 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2af676b7-b75c-4dae-98d9-9caa20f87c9b-kubelet-dir\") pod \"installer-12-crc\" (UID: \"2af676b7-b75c-4dae-98d9-9caa20f87c9b\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 07 09:52:39 crc kubenswrapper[5131]: I0107 09:52:39.024411 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2af676b7-b75c-4dae-98d9-9caa20f87c9b-kube-api-access\") pod \"installer-12-crc\" (UID: \"2af676b7-b75c-4dae-98d9-9caa20f87c9b\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 07 09:52:39 crc kubenswrapper[5131]: I0107 09:52:39.024515 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2af676b7-b75c-4dae-98d9-9caa20f87c9b-var-lock\") pod \"installer-12-crc\" (UID: \"2af676b7-b75c-4dae-98d9-9caa20f87c9b\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 07 09:52:39 crc kubenswrapper[5131]: I0107 09:52:39.125698 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2af676b7-b75c-4dae-98d9-9caa20f87c9b-kube-api-access\") pod \"installer-12-crc\" (UID: \"2af676b7-b75c-4dae-98d9-9caa20f87c9b\") " 
pod="openshift-kube-apiserver/installer-12-crc" Jan 07 09:52:39 crc kubenswrapper[5131]: I0107 09:52:39.125772 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2af676b7-b75c-4dae-98d9-9caa20f87c9b-var-lock\") pod \"installer-12-crc\" (UID: \"2af676b7-b75c-4dae-98d9-9caa20f87c9b\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 07 09:52:39 crc kubenswrapper[5131]: I0107 09:52:39.125845 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2af676b7-b75c-4dae-98d9-9caa20f87c9b-kubelet-dir\") pod \"installer-12-crc\" (UID: \"2af676b7-b75c-4dae-98d9-9caa20f87c9b\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 07 09:52:39 crc kubenswrapper[5131]: I0107 09:52:39.125924 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2af676b7-b75c-4dae-98d9-9caa20f87c9b-kubelet-dir\") pod \"installer-12-crc\" (UID: \"2af676b7-b75c-4dae-98d9-9caa20f87c9b\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 07 09:52:39 crc kubenswrapper[5131]: I0107 09:52:39.125939 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2af676b7-b75c-4dae-98d9-9caa20f87c9b-var-lock\") pod \"installer-12-crc\" (UID: \"2af676b7-b75c-4dae-98d9-9caa20f87c9b\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 07 09:52:39 crc kubenswrapper[5131]: I0107 09:52:39.147320 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2af676b7-b75c-4dae-98d9-9caa20f87c9b-kube-api-access\") pod \"installer-12-crc\" (UID: \"2af676b7-b75c-4dae-98d9-9caa20f87c9b\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 07 09:52:39 crc kubenswrapper[5131]: I0107 09:52:39.187900 5131 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 07 09:52:39 crc kubenswrapper[5131]: I0107 09:52:39.373981 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Jan 07 09:52:40 crc kubenswrapper[5131]: I0107 09:52:40.026908 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"2af676b7-b75c-4dae-98d9-9caa20f87c9b","Type":"ContainerStarted","Data":"50619c1fca40b9acac641537675c56f59d8dd697544cc4f9b1532b686f364d6a"} Jan 07 09:52:40 crc kubenswrapper[5131]: I0107 09:52:40.026953 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"2af676b7-b75c-4dae-98d9-9caa20f87c9b","Type":"ContainerStarted","Data":"dd63402525cb415eb21ae19e3234f9ca0cee5c59d2c6c7f9431d0490ee0d24c7"} Jan 07 09:52:40 crc kubenswrapper[5131]: I0107 09:52:40.042795 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-12-crc" podStartSLOduration=2.042779244 podStartE2EDuration="2.042779244s" podCreationTimestamp="2026-01-07 09:52:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:52:40.042447145 +0000 UTC m=+188.208748709" watchObservedRunningTime="2026-01-07 09:52:40.042779244 +0000 UTC m=+188.209080808" Jan 07 09:52:46 crc kubenswrapper[5131]: I0107 09:52:46.704543 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-sftp2"] Jan 07 09:53:11 crc kubenswrapper[5131]: I0107 09:53:11.013968 5131 ???:1] "http: TLS handshake error from 192.168.126.11:36088: no serving certificate available for the kubelet" Jan 07 09:53:11 crc kubenswrapper[5131]: I0107 09:53:11.731963 5131 kuberuntime_container.go:858] "Killing container with a grace period" 
pod="openshift-authentication/oauth-openshift-66458b6674-sftp2" podUID="8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4" containerName="oauth-openshift" containerID="cri-o://ac3775a0a3c0f7d75fe9355702065cbe264fb91ef65e8d449bae80b6bf815414" gracePeriod=15 Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.234204 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-sftp2" Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.234255 5131 generic.go:358] "Generic (PLEG): container finished" podID="8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4" containerID="ac3775a0a3c0f7d75fe9355702065cbe264fb91ef65e8d449bae80b6bf815414" exitCode=0 Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.234291 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-sftp2" event={"ID":"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4","Type":"ContainerDied","Data":"ac3775a0a3c0f7d75fe9355702065cbe264fb91ef65e8d449bae80b6bf815414"} Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.235862 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-sftp2" event={"ID":"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4","Type":"ContainerDied","Data":"df9f7b11cf60fa03b49500b61024f159737cd6933dda063ebebbb2608753f64f"} Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.235916 5131 scope.go:117] "RemoveContainer" containerID="ac3775a0a3c0f7d75fe9355702065cbe264fb91ef65e8d449bae80b6bf815414" Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.266197 5131 scope.go:117] "RemoveContainer" containerID="ac3775a0a3c0f7d75fe9355702065cbe264fb91ef65e8d449bae80b6bf815414" Jan 07 09:53:12 crc kubenswrapper[5131]: E0107 09:53:12.277899 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac3775a0a3c0f7d75fe9355702065cbe264fb91ef65e8d449bae80b6bf815414\": container 
with ID starting with ac3775a0a3c0f7d75fe9355702065cbe264fb91ef65e8d449bae80b6bf815414 not found: ID does not exist" containerID="ac3775a0a3c0f7d75fe9355702065cbe264fb91ef65e8d449bae80b6bf815414" Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.278432 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac3775a0a3c0f7d75fe9355702065cbe264fb91ef65e8d449bae80b6bf815414"} err="failed to get container status \"ac3775a0a3c0f7d75fe9355702065cbe264fb91ef65e8d449bae80b6bf815414\": rpc error: code = NotFound desc = could not find container \"ac3775a0a3c0f7d75fe9355702065cbe264fb91ef65e8d449bae80b6bf815414\": container with ID starting with ac3775a0a3c0f7d75fe9355702065cbe264fb91ef65e8d449bae80b6bf815414 not found: ID does not exist" Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.278983 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-56d465957c-s6tz2"] Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.286568 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4" containerName="oauth-openshift" Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.286623 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4" containerName="oauth-openshift" Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.286902 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4" containerName="oauth-openshift" Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.297985 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-56d465957c-s6tz2" Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.301860 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-56d465957c-s6tz2"] Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.320585 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-audit-dir\") pod \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\" (UID: \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\") " Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.320663 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-user-template-error\") pod \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\" (UID: \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\") " Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.320691 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-user-template-login\") pod \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\" (UID: \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\") " Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.320773 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-system-ocp-branding-template\") pod \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\" (UID: \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\") " Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.320799 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-audit-policies\") pod \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\" (UID: \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\") " Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.320851 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-system-router-certs\") pod \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\" (UID: \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\") " Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.320880 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-system-trusted-ca-bundle\") pod \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\" (UID: \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\") " Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.320932 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-system-service-ca\") pod \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\" (UID: \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\") " Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.320963 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-user-idp-0-file-data\") pod \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\" (UID: \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\") " Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.321002 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-system-session\") pod \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\" (UID: \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\") " Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.321035 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-system-cliconfig\") pod \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\" (UID: \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\") " Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.321058 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-system-serving-cert\") pod \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\" (UID: \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\") " Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.321081 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7pg4\" (UniqueName: \"kubernetes.io/projected/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-kube-api-access-q7pg4\") pod \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\" (UID: \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\") " Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.321102 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-user-template-provider-selection\") pod \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\" (UID: \"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4\") " Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.322081 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-audit-dir" (OuterVolumeSpecName: "audit-dir") pod 
"8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4" (UID: "8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.325329 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4" (UID: "8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.325939 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4" (UID: "8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.325963 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4" (UID: "8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.326574 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4" (UID: "8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.330471 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4" (UID: "8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.330499 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4" (UID: "8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.330532 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4" (UID: "8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.347375 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4" (UID: "8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.347403 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4" (UID: "8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.347581 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4" (UID: "8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.347623 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-kube-api-access-q7pg4" (OuterVolumeSpecName: "kube-api-access-q7pg4") pod "8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4" (UID: "8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4"). InnerVolumeSpecName "kube-api-access-q7pg4". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.347632 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4" (UID: "8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.351375 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4" (UID: "8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.422352 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6d7961f9-ac7b-4b81-a600-d21b013659c9-audit-policies\") pod \"oauth-openshift-56d465957c-s6tz2\" (UID: \"6d7961f9-ac7b-4b81-a600-d21b013659c9\") " pod="openshift-authentication/oauth-openshift-56d465957c-s6tz2" Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.422414 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6d7961f9-ac7b-4b81-a600-d21b013659c9-v4-0-config-user-template-login\") pod \"oauth-openshift-56d465957c-s6tz2\" (UID: \"6d7961f9-ac7b-4b81-a600-d21b013659c9\") " pod="openshift-authentication/oauth-openshift-56d465957c-s6tz2" Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.422452 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6d7961f9-ac7b-4b81-a600-d21b013659c9-v4-0-config-user-template-error\") pod \"oauth-openshift-56d465957c-s6tz2\" (UID: \"6d7961f9-ac7b-4b81-a600-d21b013659c9\") " pod="openshift-authentication/oauth-openshift-56d465957c-s6tz2" Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.422476 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d7961f9-ac7b-4b81-a600-d21b013659c9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-56d465957c-s6tz2\" (UID: \"6d7961f9-ac7b-4b81-a600-d21b013659c9\") " pod="openshift-authentication/oauth-openshift-56d465957c-s6tz2" Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.422574 5131 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6d7961f9-ac7b-4b81-a600-d21b013659c9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-56d465957c-s6tz2\" (UID: \"6d7961f9-ac7b-4b81-a600-d21b013659c9\") " pod="openshift-authentication/oauth-openshift-56d465957c-s6tz2"
Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.422653 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6d7961f9-ac7b-4b81-a600-d21b013659c9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-56d465957c-s6tz2\" (UID: \"6d7961f9-ac7b-4b81-a600-d21b013659c9\") " pod="openshift-authentication/oauth-openshift-56d465957c-s6tz2"
Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.422702 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6d7961f9-ac7b-4b81-a600-d21b013659c9-v4-0-config-system-router-certs\") pod \"oauth-openshift-56d465957c-s6tz2\" (UID: \"6d7961f9-ac7b-4b81-a600-d21b013659c9\") " pod="openshift-authentication/oauth-openshift-56d465957c-s6tz2"
Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.422792 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6d7961f9-ac7b-4b81-a600-d21b013659c9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-56d465957c-s6tz2\" (UID: \"6d7961f9-ac7b-4b81-a600-d21b013659c9\") " pod="openshift-authentication/oauth-openshift-56d465957c-s6tz2"
Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.422814 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6d7961f9-ac7b-4b81-a600-d21b013659c9-v4-0-config-system-service-ca\") pod \"oauth-openshift-56d465957c-s6tz2\" (UID: \"6d7961f9-ac7b-4b81-a600-d21b013659c9\") " pod="openshift-authentication/oauth-openshift-56d465957c-s6tz2"
Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.422897 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6d7961f9-ac7b-4b81-a600-d21b013659c9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-56d465957c-s6tz2\" (UID: \"6d7961f9-ac7b-4b81-a600-d21b013659c9\") " pod="openshift-authentication/oauth-openshift-56d465957c-s6tz2"
Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.422938 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6d7961f9-ac7b-4b81-a600-d21b013659c9-audit-dir\") pod \"oauth-openshift-56d465957c-s6tz2\" (UID: \"6d7961f9-ac7b-4b81-a600-d21b013659c9\") " pod="openshift-authentication/oauth-openshift-56d465957c-s6tz2"
Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.422963 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr9nr\" (UniqueName: \"kubernetes.io/projected/6d7961f9-ac7b-4b81-a600-d21b013659c9-kube-api-access-qr9nr\") pod \"oauth-openshift-56d465957c-s6tz2\" (UID: \"6d7961f9-ac7b-4b81-a600-d21b013659c9\") " pod="openshift-authentication/oauth-openshift-56d465957c-s6tz2"
Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.423070 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6d7961f9-ac7b-4b81-a600-d21b013659c9-v4-0-config-system-session\") pod \"oauth-openshift-56d465957c-s6tz2\" (UID: \"6d7961f9-ac7b-4b81-a600-d21b013659c9\") " pod="openshift-authentication/oauth-openshift-56d465957c-s6tz2"
Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.423137 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6d7961f9-ac7b-4b81-a600-d21b013659c9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-56d465957c-s6tz2\" (UID: \"6d7961f9-ac7b-4b81-a600-d21b013659c9\") " pod="openshift-authentication/oauth-openshift-56d465957c-s6tz2"
Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.423218 5131 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.423239 5131 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.423256 5131 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.423269 5131 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.423281 5131 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.423293 5131 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.423305 5131 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.423318 5131 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.423331 5131 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.423344 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q7pg4\" (UniqueName: \"kubernetes.io/projected/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-kube-api-access-q7pg4\") on node \"crc\" DevicePath \"\""
Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.423356 5131 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.423372 5131 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-audit-dir\") on node \"crc\" DevicePath \"\""
Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.423384 5131 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.423396 5131 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.524671 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6d7961f9-ac7b-4b81-a600-d21b013659c9-v4-0-config-user-template-error\") pod \"oauth-openshift-56d465957c-s6tz2\" (UID: \"6d7961f9-ac7b-4b81-a600-d21b013659c9\") " pod="openshift-authentication/oauth-openshift-56d465957c-s6tz2"
Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.524743 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d7961f9-ac7b-4b81-a600-d21b013659c9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-56d465957c-s6tz2\" (UID: \"6d7961f9-ac7b-4b81-a600-d21b013659c9\") " pod="openshift-authentication/oauth-openshift-56d465957c-s6tz2"
Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.524796 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6d7961f9-ac7b-4b81-a600-d21b013659c9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-56d465957c-s6tz2\" (UID: 
\"6d7961f9-ac7b-4b81-a600-d21b013659c9\") " pod="openshift-authentication/oauth-openshift-56d465957c-s6tz2"
Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.524875 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6d7961f9-ac7b-4b81-a600-d21b013659c9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-56d465957c-s6tz2\" (UID: \"6d7961f9-ac7b-4b81-a600-d21b013659c9\") " pod="openshift-authentication/oauth-openshift-56d465957c-s6tz2"
Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.525055 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6d7961f9-ac7b-4b81-a600-d21b013659c9-v4-0-config-system-router-certs\") pod \"oauth-openshift-56d465957c-s6tz2\" (UID: \"6d7961f9-ac7b-4b81-a600-d21b013659c9\") " pod="openshift-authentication/oauth-openshift-56d465957c-s6tz2"
Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.525245 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6d7961f9-ac7b-4b81-a600-d21b013659c9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-56d465957c-s6tz2\" (UID: \"6d7961f9-ac7b-4b81-a600-d21b013659c9\") " pod="openshift-authentication/oauth-openshift-56d465957c-s6tz2"
Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.525281 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6d7961f9-ac7b-4b81-a600-d21b013659c9-v4-0-config-system-service-ca\") pod \"oauth-openshift-56d465957c-s6tz2\" (UID: \"6d7961f9-ac7b-4b81-a600-d21b013659c9\") " pod="openshift-authentication/oauth-openshift-56d465957c-s6tz2"
Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.525537 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6d7961f9-ac7b-4b81-a600-d21b013659c9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-56d465957c-s6tz2\" (UID: \"6d7961f9-ac7b-4b81-a600-d21b013659c9\") " pod="openshift-authentication/oauth-openshift-56d465957c-s6tz2"
Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.525723 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6d7961f9-ac7b-4b81-a600-d21b013659c9-audit-dir\") pod \"oauth-openshift-56d465957c-s6tz2\" (UID: \"6d7961f9-ac7b-4b81-a600-d21b013659c9\") " pod="openshift-authentication/oauth-openshift-56d465957c-s6tz2"
Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.526054 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6d7961f9-ac7b-4b81-a600-d21b013659c9-audit-dir\") pod \"oauth-openshift-56d465957c-s6tz2\" (UID: \"6d7961f9-ac7b-4b81-a600-d21b013659c9\") " pod="openshift-authentication/oauth-openshift-56d465957c-s6tz2"
Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.526695 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6d7961f9-ac7b-4b81-a600-d21b013659c9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-56d465957c-s6tz2\" (UID: \"6d7961f9-ac7b-4b81-a600-d21b013659c9\") " pod="openshift-authentication/oauth-openshift-56d465957c-s6tz2"
Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.526728 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6d7961f9-ac7b-4b81-a600-d21b013659c9-v4-0-config-system-service-ca\") pod \"oauth-openshift-56d465957c-s6tz2\" (UID: \"6d7961f9-ac7b-4b81-a600-d21b013659c9\") " pod="openshift-authentication/oauth-openshift-56d465957c-s6tz2"
Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.527307 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d7961f9-ac7b-4b81-a600-d21b013659c9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-56d465957c-s6tz2\" (UID: \"6d7961f9-ac7b-4b81-a600-d21b013659c9\") " pod="openshift-authentication/oauth-openshift-56d465957c-s6tz2"
Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.527413 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qr9nr\" (UniqueName: \"kubernetes.io/projected/6d7961f9-ac7b-4b81-a600-d21b013659c9-kube-api-access-qr9nr\") pod \"oauth-openshift-56d465957c-s6tz2\" (UID: \"6d7961f9-ac7b-4b81-a600-d21b013659c9\") " pod="openshift-authentication/oauth-openshift-56d465957c-s6tz2"
Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.527546 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6d7961f9-ac7b-4b81-a600-d21b013659c9-v4-0-config-system-session\") pod \"oauth-openshift-56d465957c-s6tz2\" (UID: \"6d7961f9-ac7b-4b81-a600-d21b013659c9\") " pod="openshift-authentication/oauth-openshift-56d465957c-s6tz2"
Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.528302 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6d7961f9-ac7b-4b81-a600-d21b013659c9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-56d465957c-s6tz2\" (UID: \"6d7961f9-ac7b-4b81-a600-d21b013659c9\") " pod="openshift-authentication/oauth-openshift-56d465957c-s6tz2"
Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.528381 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6d7961f9-ac7b-4b81-a600-d21b013659c9-audit-policies\") pod \"oauth-openshift-56d465957c-s6tz2\" (UID: \"6d7961f9-ac7b-4b81-a600-d21b013659c9\") " pod="openshift-authentication/oauth-openshift-56d465957c-s6tz2"
Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.528465 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6d7961f9-ac7b-4b81-a600-d21b013659c9-v4-0-config-user-template-login\") pod \"oauth-openshift-56d465957c-s6tz2\" (UID: \"6d7961f9-ac7b-4b81-a600-d21b013659c9\") " pod="openshift-authentication/oauth-openshift-56d465957c-s6tz2"
Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.529542 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6d7961f9-ac7b-4b81-a600-d21b013659c9-audit-policies\") pod \"oauth-openshift-56d465957c-s6tz2\" (UID: \"6d7961f9-ac7b-4b81-a600-d21b013659c9\") " pod="openshift-authentication/oauth-openshift-56d465957c-s6tz2"
Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.534606 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6d7961f9-ac7b-4b81-a600-d21b013659c9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-56d465957c-s6tz2\" (UID: \"6d7961f9-ac7b-4b81-a600-d21b013659c9\") " pod="openshift-authentication/oauth-openshift-56d465957c-s6tz2"
Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.535186 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6d7961f9-ac7b-4b81-a600-d21b013659c9-v4-0-config-user-template-error\") pod \"oauth-openshift-56d465957c-s6tz2\" (UID: \"6d7961f9-ac7b-4b81-a600-d21b013659c9\") " pod="openshift-authentication/oauth-openshift-56d465957c-s6tz2"
Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.535406 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6d7961f9-ac7b-4b81-a600-d21b013659c9-v4-0-config-user-template-login\") pod \"oauth-openshift-56d465957c-s6tz2\" (UID: \"6d7961f9-ac7b-4b81-a600-d21b013659c9\") " pod="openshift-authentication/oauth-openshift-56d465957c-s6tz2"
Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.537166 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6d7961f9-ac7b-4b81-a600-d21b013659c9-v4-0-config-system-session\") pod \"oauth-openshift-56d465957c-s6tz2\" (UID: \"6d7961f9-ac7b-4b81-a600-d21b013659c9\") " pod="openshift-authentication/oauth-openshift-56d465957c-s6tz2"
Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.538029 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6d7961f9-ac7b-4b81-a600-d21b013659c9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-56d465957c-s6tz2\" (UID: \"6d7961f9-ac7b-4b81-a600-d21b013659c9\") " pod="openshift-authentication/oauth-openshift-56d465957c-s6tz2"
Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.538134 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6d7961f9-ac7b-4b81-a600-d21b013659c9-v4-0-config-system-router-certs\") pod \"oauth-openshift-56d465957c-s6tz2\" (UID: \"6d7961f9-ac7b-4b81-a600-d21b013659c9\") " pod="openshift-authentication/oauth-openshift-56d465957c-s6tz2"
Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.538549 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/6d7961f9-ac7b-4b81-a600-d21b013659c9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-56d465957c-s6tz2\" (UID: \"6d7961f9-ac7b-4b81-a600-d21b013659c9\") " pod="openshift-authentication/oauth-openshift-56d465957c-s6tz2"
Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.541600 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6d7961f9-ac7b-4b81-a600-d21b013659c9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-56d465957c-s6tz2\" (UID: \"6d7961f9-ac7b-4b81-a600-d21b013659c9\") " pod="openshift-authentication/oauth-openshift-56d465957c-s6tz2"
Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.561076 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qr9nr\" (UniqueName: \"kubernetes.io/projected/6d7961f9-ac7b-4b81-a600-d21b013659c9-kube-api-access-qr9nr\") pod \"oauth-openshift-56d465957c-s6tz2\" (UID: \"6d7961f9-ac7b-4b81-a600-d21b013659c9\") " pod="openshift-authentication/oauth-openshift-56d465957c-s6tz2"
Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.615869 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-56d465957c-s6tz2"
Jan 07 09:53:12 crc kubenswrapper[5131]: I0107 09:53:12.823439 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-56d465957c-s6tz2"]
Jan 07 09:53:13 crc kubenswrapper[5131]: I0107 09:53:13.244645 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-56d465957c-s6tz2" event={"ID":"6d7961f9-ac7b-4b81-a600-d21b013659c9","Type":"ContainerStarted","Data":"6297bf3d5db7d88ffb92a905ad0193b8c9f7dc07322d26a66eb5aff693bbfce8"}
Jan 07 09:53:13 crc kubenswrapper[5131]: I0107 09:53:13.245072 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-56d465957c-s6tz2" event={"ID":"6d7961f9-ac7b-4b81-a600-d21b013659c9","Type":"ContainerStarted","Data":"9bb0e6159ddb6f60a71c75fbb11e676c6ebeb440a2655dd870a8047d81d50073"}
Jan 07 09:53:13 crc kubenswrapper[5131]: I0107 09:53:13.245091 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-56d465957c-s6tz2"
Jan 07 09:53:13 crc kubenswrapper[5131]: I0107 09:53:13.249300 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-sftp2"
Jan 07 09:53:13 crc kubenswrapper[5131]: I0107 09:53:13.273565 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-56d465957c-s6tz2" podStartSLOduration=27.273530206 podStartE2EDuration="27.273530206s" podCreationTimestamp="2026-01-07 09:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:53:13.268996586 +0000 UTC m=+221.435298250" watchObservedRunningTime="2026-01-07 09:53:13.273530206 +0000 UTC m=+221.439831860"
Jan 07 09:53:13 crc kubenswrapper[5131]: I0107 09:53:13.312885 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-sftp2"]
Jan 07 09:53:13 crc kubenswrapper[5131]: I0107 09:53:13.317444 5131 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-sftp2"]
Jan 07 09:53:13 crc kubenswrapper[5131]: I0107 09:53:13.686702 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-56d465957c-s6tz2"
Jan 07 09:53:14 crc kubenswrapper[5131]: I0107 09:53:14.191228 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4" path="/var/lib/kubelet/pods/8d4e6f6e-ebfa-4be2-9c9d-55fb8b75b9f4/volumes"
Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.754747 5131 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.792483 5131 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.792567 5131 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.793375 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.793408 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.793421 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer"
Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.793406 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.793503 5131 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" containerID="cri-o://81989c7ac801be354f6f1e78382dbefc67b72ef6a85367ea48e04fc6ff4f128e" gracePeriod=15
Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.793427 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer"
Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.793658 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.793329 5131 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" containerID="cri-o://9ab1390be253b0acce2b38b656b6ab5fb3b2b0b0df6b0bf4aa1c9a6706d5b377" gracePeriod=15
Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.793505 5131 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://222177f33dbcd646941928b01ab9b05233038233497ca1767fba6f7706b3dc93" gracePeriod=15
Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.793675 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.793775 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver"
Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.793441 5131 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://f7200e5d1d13d232ad67de2ea89381542d858c266de6b68e33bfe97a520bfd66" gracePeriod=15
Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.793452 5131 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" containerID="cri-o://f94695038c6c0633c279f363909c7c60ac6e6487469757ddfa9a64766e9ad38a" gracePeriod=15
Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.793790 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver"
Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.793980 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.793996 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.794045 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup"
Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.794057 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup"
Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.794081 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz"
Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.794093 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz"
Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.794115 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller"
Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.794127 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller"
Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.794375 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz"
Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.794416 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller"
Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.794442 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver"
Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.794479 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.794499 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.794519 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.794542 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer"
Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.794761 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.794792 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.795140 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.810593 5131 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="3a14caf222afb62aaabdc47808b6f944" podUID="57755cc5f99000cc11e193051474d4e2"
Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.825183 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.825246 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.825310 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.825357 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.825397 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.825464 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.825516 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.825574 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.825637 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.825678 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.856440 5131 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.926646 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.926715 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.926755 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.926800 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.926859 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.926921 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.926971 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.927024 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.927058 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.927083 5131 reconciler_common.go:224] "operationExecutor.MountVolume started 
for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.927182 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.927226 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.927253 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.927278 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.927303 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.928393 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.928449 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.928664 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.928706 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 09:53:17 crc kubenswrapper[5131]: I0107 09:53:17.928773 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 07 09:53:18 crc kubenswrapper[5131]: I0107 09:53:18.292269 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 07 09:53:18 crc kubenswrapper[5131]: I0107 09:53:18.294043 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 07 09:53:18 crc kubenswrapper[5131]: I0107 09:53:18.294963 5131 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="81989c7ac801be354f6f1e78382dbefc67b72ef6a85367ea48e04fc6ff4f128e" exitCode=0 Jan 07 09:53:18 crc kubenswrapper[5131]: I0107 09:53:18.295004 5131 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="f7200e5d1d13d232ad67de2ea89381542d858c266de6b68e33bfe97a520bfd66" exitCode=0 Jan 07 09:53:18 crc kubenswrapper[5131]: I0107 09:53:18.295014 5131 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="222177f33dbcd646941928b01ab9b05233038233497ca1767fba6f7706b3dc93" exitCode=0 Jan 07 09:53:18 crc kubenswrapper[5131]: I0107 09:53:18.295028 5131 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="f94695038c6c0633c279f363909c7c60ac6e6487469757ddfa9a64766e9ad38a" exitCode=2 Jan 07 09:53:18 crc kubenswrapper[5131]: I0107 09:53:18.295041 5131 scope.go:117] "RemoveContainer" containerID="9ce5c5322e4dfa939241d2f3f807c9d150117431e391c5f986a200413b054a33" Jan 07 09:53:19 crc kubenswrapper[5131]: I0107 09:53:19.307044 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 07 09:53:20 crc 
kubenswrapper[5131]: I0107 09:53:20.330634 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 07 09:53:20 crc kubenswrapper[5131]: I0107 09:53:20.332245 5131 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="9ab1390be253b0acce2b38b656b6ab5fb3b2b0b0df6b0bf4aa1c9a6706d5b377" exitCode=0 Jan 07 09:53:20 crc kubenswrapper[5131]: I0107 09:53:20.663302 5131 patch_prober.go:28] interesting pod/machine-config-daemon-dvdrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 07 09:53:20 crc kubenswrapper[5131]: I0107 09:53:20.663408 5131 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 07 09:53:20 crc kubenswrapper[5131]: E0107 09:53:20.664325 5131 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events\": dial tcp 38.102.83.220:6443: connect: connection refused" event=< Jan 07 09:53:20 crc kubenswrapper[5131]: &Event{ObjectMeta:{machine-config-daemon-dvdrn.18886a2e42b8d1b0 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:machine-config-daemon-dvdrn,UID:3942e752-44ba-4678-8723-6cd778e60d73,APIVersion:v1,ResourceVersion:36236,FieldPath:spec.containers{machine-config-daemon},},Reason:ProbeError,Message:Liveness 
probe error: Get "http://127.0.0.1:8798/health": dial tcp 127.0.0.1:8798: connect: connection refused Jan 07 09:53:20 crc kubenswrapper[5131]: body: Jan 07 09:53:20 crc kubenswrapper[5131]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:53:20.663368112 +0000 UTC m=+228.829669706,LastTimestamp:2026-01-07 09:53:20.663368112 +0000 UTC m=+228.829669706,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 07 09:53:20 crc kubenswrapper[5131]: > Jan 07 09:53:20 crc kubenswrapper[5131]: I0107 09:53:20.716020 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 07 09:53:20 crc kubenswrapper[5131]: I0107 09:53:20.717902 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 09:53:20 crc kubenswrapper[5131]: I0107 09:53:20.718470 5131 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" Jan 07 09:53:20 crc kubenswrapper[5131]: I0107 09:53:20.772250 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 07 09:53:20 crc kubenswrapper[5131]: I0107 09:53:20.772410 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir" (OuterVolumeSpecName: "resource-dir") pod 
"3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 07 09:53:20 crc kubenswrapper[5131]: I0107 09:53:20.772450 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 07 09:53:20 crc kubenswrapper[5131]: I0107 09:53:20.772543 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 07 09:53:20 crc kubenswrapper[5131]: I0107 09:53:20.772645 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 07 09:53:20 crc kubenswrapper[5131]: I0107 09:53:20.772710 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 07 09:53:20 crc kubenswrapper[5131]: I0107 09:53:20.772805 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 07 09:53:20 crc kubenswrapper[5131]: I0107 09:53:20.772943 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 07 09:53:20 crc kubenswrapper[5131]: I0107 09:53:20.773139 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" (OuterVolumeSpecName: "ca-bundle-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "ca-bundle-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 09:53:20 crc kubenswrapper[5131]: I0107 09:53:20.773313 5131 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 07 09:53:20 crc kubenswrapper[5131]: I0107 09:53:20.773344 5131 reconciler_common.go:299] "Volume detached for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") on node \"crc\" DevicePath \"\"" Jan 07 09:53:20 crc kubenswrapper[5131]: I0107 09:53:20.773366 5131 reconciler_common.go:299] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 07 09:53:20 crc kubenswrapper[5131]: I0107 09:53:20.773385 5131 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 07 09:53:20 crc kubenswrapper[5131]: I0107 09:53:20.775941 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 09:53:20 crc kubenswrapper[5131]: I0107 09:53:20.875744 5131 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 07 09:53:21 crc kubenswrapper[5131]: I0107 09:53:21.343500 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 07 09:53:21 crc kubenswrapper[5131]: I0107 09:53:21.344705 5131 scope.go:117] "RemoveContainer" containerID="81989c7ac801be354f6f1e78382dbefc67b72ef6a85367ea48e04fc6ff4f128e" Jan 07 09:53:21 crc kubenswrapper[5131]: I0107 09:53:21.345017 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 09:53:21 crc kubenswrapper[5131]: I0107 09:53:21.365466 5131 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" Jan 07 09:53:21 crc kubenswrapper[5131]: I0107 09:53:21.365503 5131 scope.go:117] "RemoveContainer" containerID="f7200e5d1d13d232ad67de2ea89381542d858c266de6b68e33bfe97a520bfd66" Jan 07 09:53:21 crc kubenswrapper[5131]: I0107 09:53:21.387469 5131 scope.go:117] "RemoveContainer" containerID="222177f33dbcd646941928b01ab9b05233038233497ca1767fba6f7706b3dc93" Jan 07 09:53:21 crc kubenswrapper[5131]: I0107 09:53:21.414369 5131 scope.go:117] "RemoveContainer" containerID="f94695038c6c0633c279f363909c7c60ac6e6487469757ddfa9a64766e9ad38a" Jan 07 09:53:21 crc kubenswrapper[5131]: I0107 09:53:21.428794 5131 scope.go:117] "RemoveContainer" 
containerID="9ab1390be253b0acce2b38b656b6ab5fb3b2b0b0df6b0bf4aa1c9a6706d5b377" Jan 07 09:53:21 crc kubenswrapper[5131]: I0107 09:53:21.446160 5131 scope.go:117] "RemoveContainer" containerID="dd6371190af55f4fae494e73d316e7347f26bd60e0b02bc18c31ce1cf7f1bb9b" Jan 07 09:53:22 crc kubenswrapper[5131]: I0107 09:53:22.189031 5131 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" Jan 07 09:53:22 crc kubenswrapper[5131]: I0107 09:53:22.190340 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a14caf222afb62aaabdc47808b6f944" path="/var/lib/kubelet/pods/3a14caf222afb62aaabdc47808b6f944/volumes" Jan 07 09:53:22 crc kubenswrapper[5131]: E0107 09:53:22.858563 5131 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.220:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 07 09:53:22 crc kubenswrapper[5131]: I0107 09:53:22.859525 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 07 09:53:23 crc kubenswrapper[5131]: I0107 09:53:23.363018 5131 generic.go:358] "Generic (PLEG): container finished" podID="2af676b7-b75c-4dae-98d9-9caa20f87c9b" containerID="50619c1fca40b9acac641537675c56f59d8dd697544cc4f9b1532b686f364d6a" exitCode=0 Jan 07 09:53:23 crc kubenswrapper[5131]: I0107 09:53:23.363088 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"2af676b7-b75c-4dae-98d9-9caa20f87c9b","Type":"ContainerDied","Data":"50619c1fca40b9acac641537675c56f59d8dd697544cc4f9b1532b686f364d6a"} Jan 07 09:53:23 crc kubenswrapper[5131]: I0107 09:53:23.364157 5131 status_manager.go:895] "Failed to get status for pod" podUID="2af676b7-b75c-4dae-98d9-9caa20f87c9b" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" Jan 07 09:53:23 crc kubenswrapper[5131]: I0107 09:53:23.364995 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"8b39e8a0353ed9c99ba1b3d63d7b023d4e6bc93f90148867b373466596497aaf"} Jan 07 09:53:23 crc kubenswrapper[5131]: I0107 09:53:23.365044 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"435c085d76429b5a24ad06b0cd95eb00d1f51f9161da90b7c39acfe631ae65aa"} Jan 07 09:53:23 crc kubenswrapper[5131]: I0107 09:53:23.365425 5131 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 07 09:53:23 crc kubenswrapper[5131]: I0107 09:53:23.365732 5131 status_manager.go:895] 
"Failed to get status for pod" podUID="2af676b7-b75c-4dae-98d9-9caa20f87c9b" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" Jan 07 09:53:23 crc kubenswrapper[5131]: E0107 09:53:23.365824 5131 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.220:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 07 09:53:24 crc kubenswrapper[5131]: I0107 09:53:24.644470 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 07 09:53:24 crc kubenswrapper[5131]: I0107 09:53:24.645252 5131 status_manager.go:895] "Failed to get status for pod" podUID="2af676b7-b75c-4dae-98d9-9caa20f87c9b" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" Jan 07 09:53:24 crc kubenswrapper[5131]: I0107 09:53:24.737161 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2af676b7-b75c-4dae-98d9-9caa20f87c9b-kubelet-dir\") pod \"2af676b7-b75c-4dae-98d9-9caa20f87c9b\" (UID: \"2af676b7-b75c-4dae-98d9-9caa20f87c9b\") " Jan 07 09:53:24 crc kubenswrapper[5131]: I0107 09:53:24.737297 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2af676b7-b75c-4dae-98d9-9caa20f87c9b-var-lock\") pod \"2af676b7-b75c-4dae-98d9-9caa20f87c9b\" (UID: \"2af676b7-b75c-4dae-98d9-9caa20f87c9b\") " Jan 07 09:53:24 crc kubenswrapper[5131]: I0107 09:53:24.737323 5131 operation_generator.go:781] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2af676b7-b75c-4dae-98d9-9caa20f87c9b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2af676b7-b75c-4dae-98d9-9caa20f87c9b" (UID: "2af676b7-b75c-4dae-98d9-9caa20f87c9b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 07 09:53:24 crc kubenswrapper[5131]: I0107 09:53:24.737374 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2af676b7-b75c-4dae-98d9-9caa20f87c9b-var-lock" (OuterVolumeSpecName: "var-lock") pod "2af676b7-b75c-4dae-98d9-9caa20f87c9b" (UID: "2af676b7-b75c-4dae-98d9-9caa20f87c9b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 07 09:53:24 crc kubenswrapper[5131]: I0107 09:53:24.737447 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2af676b7-b75c-4dae-98d9-9caa20f87c9b-kube-api-access\") pod \"2af676b7-b75c-4dae-98d9-9caa20f87c9b\" (UID: \"2af676b7-b75c-4dae-98d9-9caa20f87c9b\") " Jan 07 09:53:24 crc kubenswrapper[5131]: I0107 09:53:24.737984 5131 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2af676b7-b75c-4dae-98d9-9caa20f87c9b-var-lock\") on node \"crc\" DevicePath \"\"" Jan 07 09:53:24 crc kubenswrapper[5131]: I0107 09:53:24.738004 5131 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2af676b7-b75c-4dae-98d9-9caa20f87c9b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 07 09:53:24 crc kubenswrapper[5131]: I0107 09:53:24.746917 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2af676b7-b75c-4dae-98d9-9caa20f87c9b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2af676b7-b75c-4dae-98d9-9caa20f87c9b" (UID: "2af676b7-b75c-4dae-98d9-9caa20f87c9b"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:53:24 crc kubenswrapper[5131]: I0107 09:53:24.840049 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2af676b7-b75c-4dae-98d9-9caa20f87c9b-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 07 09:53:25 crc kubenswrapper[5131]: I0107 09:53:25.382256 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 07 09:53:25 crc kubenswrapper[5131]: I0107 09:53:25.382284 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"2af676b7-b75c-4dae-98d9-9caa20f87c9b","Type":"ContainerDied","Data":"dd63402525cb415eb21ae19e3234f9ca0cee5c59d2c6c7f9431d0490ee0d24c7"} Jan 07 09:53:25 crc kubenswrapper[5131]: I0107 09:53:25.382346 5131 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd63402525cb415eb21ae19e3234f9ca0cee5c59d2c6c7f9431d0490ee0d24c7" Jan 07 09:53:25 crc kubenswrapper[5131]: I0107 09:53:25.407039 5131 status_manager.go:895] "Failed to get status for pod" podUID="2af676b7-b75c-4dae-98d9-9caa20f87c9b" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" Jan 07 09:53:27 crc kubenswrapper[5131]: E0107 09:53:27.101365 5131 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.220:6443: connect: connection refused" Jan 07 09:53:27 crc kubenswrapper[5131]: E0107 09:53:27.102386 5131 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.220:6443: connect: connection refused" Jan 07 09:53:27 crc kubenswrapper[5131]: E0107 09:53:27.103132 5131 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.220:6443: connect: connection refused" Jan 07 09:53:27 crc kubenswrapper[5131]: E0107 09:53:27.103584 5131 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.220:6443: connect: connection refused" Jan 07 09:53:27 crc kubenswrapper[5131]: E0107 09:53:27.104131 5131 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.220:6443: connect: connection refused" Jan 07 09:53:27 crc kubenswrapper[5131]: I0107 09:53:27.104194 5131 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 07 09:53:27 crc kubenswrapper[5131]: E0107 09:53:27.104597 5131 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.220:6443: connect: connection refused" interval="200ms" Jan 07 09:53:27 crc kubenswrapper[5131]: E0107 09:53:27.305197 5131 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.220:6443: connect: connection refused" interval="400ms" Jan 07 09:53:27 crc kubenswrapper[5131]: E0107 09:53:27.706101 5131 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.220:6443: connect: connection refused" interval="800ms" Jan 07 09:53:28 crc kubenswrapper[5131]: E0107 09:53:28.477770 5131 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events\": dial tcp 38.102.83.220:6443: connect: connection refused" event=< Jan 07 09:53:28 crc kubenswrapper[5131]: &Event{ObjectMeta:{machine-config-daemon-dvdrn.18886a2e42b8d1b0 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:machine-config-daemon-dvdrn,UID:3942e752-44ba-4678-8723-6cd778e60d73,APIVersion:v1,ResourceVersion:36236,FieldPath:spec.containers{machine-config-daemon},},Reason:ProbeError,Message:Liveness probe error: Get "http://127.0.0.1:8798/health": dial tcp 127.0.0.1:8798: connect: connection refused Jan 07 09:53:28 crc kubenswrapper[5131]: body: Jan 07 09:53:28 crc kubenswrapper[5131]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-07 09:53:20.663368112 +0000 UTC m=+228.829669706,LastTimestamp:2026-01-07 09:53:20.663368112 +0000 UTC m=+228.829669706,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 07 09:53:28 crc kubenswrapper[5131]: > Jan 07 09:53:28 crc kubenswrapper[5131]: E0107 09:53:28.507912 5131 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.220:6443: connect: connection refused" interval="1.6s" Jan 07 09:53:29 crc kubenswrapper[5131]: I0107 
09:53:29.180142 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 09:53:29 crc kubenswrapper[5131]: I0107 09:53:29.181561 5131 status_manager.go:895] "Failed to get status for pod" podUID="2af676b7-b75c-4dae-98d9-9caa20f87c9b" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" Jan 07 09:53:29 crc kubenswrapper[5131]: I0107 09:53:29.205732 5131 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="37a8b62c-1e16-4bf4-8a1a-7e21eea28a36" Jan 07 09:53:29 crc kubenswrapper[5131]: I0107 09:53:29.205777 5131 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="37a8b62c-1e16-4bf4-8a1a-7e21eea28a36" Jan 07 09:53:29 crc kubenswrapper[5131]: E0107 09:53:29.206334 5131 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 09:53:29 crc kubenswrapper[5131]: I0107 09:53:29.206752 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 09:53:29 crc kubenswrapper[5131]: W0107 09:53:29.242760 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57755cc5f99000cc11e193051474d4e2.slice/crio-1b477688ab9fe962d917575cc5266ce3ca1a38af50c73a6621a2dc19d818afe3 WatchSource:0}: Error finding container 1b477688ab9fe962d917575cc5266ce3ca1a38af50c73a6621a2dc19d818afe3: Status 404 returned error can't find the container with id 1b477688ab9fe962d917575cc5266ce3ca1a38af50c73a6621a2dc19d818afe3 Jan 07 09:53:29 crc kubenswrapper[5131]: I0107 09:53:29.410518 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"1b477688ab9fe962d917575cc5266ce3ca1a38af50c73a6621a2dc19d818afe3"} Jan 07 09:53:30 crc kubenswrapper[5131]: E0107 09:53:30.109313 5131 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.220:6443: connect: connection refused" interval="3.2s" Jan 07 09:53:30 crc kubenswrapper[5131]: I0107 09:53:30.418922 5131 generic.go:358] "Generic (PLEG): container finished" podID="57755cc5f99000cc11e193051474d4e2" containerID="9af3cd7a4affba9601ad9a347e59f51a5f7844a526bea952c54f692b2386b8de" exitCode=0 Jan 07 09:53:30 crc kubenswrapper[5131]: I0107 09:53:30.419083 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerDied","Data":"9af3cd7a4affba9601ad9a347e59f51a5f7844a526bea952c54f692b2386b8de"} Jan 07 09:53:30 crc kubenswrapper[5131]: I0107 09:53:30.419479 5131 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="37a8b62c-1e16-4bf4-8a1a-7e21eea28a36" Jan 07 09:53:30 crc kubenswrapper[5131]: I0107 09:53:30.419518 5131 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="37a8b62c-1e16-4bf4-8a1a-7e21eea28a36" Jan 07 09:53:30 crc kubenswrapper[5131]: E0107 09:53:30.420167 5131 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 09:53:30 crc kubenswrapper[5131]: I0107 09:53:30.420178 5131 status_manager.go:895] "Failed to get status for pod" podUID="2af676b7-b75c-4dae-98d9-9caa20f87c9b" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" Jan 07 09:53:31 crc kubenswrapper[5131]: I0107 09:53:31.443649 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 07 09:53:31 crc kubenswrapper[5131]: I0107 09:53:31.443982 5131 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="81a19faef229379a9f11c9404f00a4cd033fe495e075b60878147f896005767f" exitCode=1 Jan 07 09:53:31 crc kubenswrapper[5131]: I0107 09:53:31.444154 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"81a19faef229379a9f11c9404f00a4cd033fe495e075b60878147f896005767f"} Jan 07 09:53:31 crc kubenswrapper[5131]: I0107 09:53:31.444599 5131 scope.go:117] "RemoveContainer" 
containerID="81a19faef229379a9f11c9404f00a4cd033fe495e075b60878147f896005767f" Jan 07 09:53:31 crc kubenswrapper[5131]: I0107 09:53:31.448443 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"bf9894cd9ad4f83d08d88e91b7ce1bc183c5433791005497adb5af1f20d73d86"} Jan 07 09:53:31 crc kubenswrapper[5131]: I0107 09:53:31.448467 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"d376a19b49beb937d8a54662af197f38ecbc87af8d2846ab233086c13b2e1c8d"} Jan 07 09:53:31 crc kubenswrapper[5131]: I0107 09:53:31.448495 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"f0abb3495cadd9467902c077891543c916a12887d9479d89ff413213af423a02"} Jan 07 09:53:32 crc kubenswrapper[5131]: I0107 09:53:32.457332 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 07 09:53:32 crc kubenswrapper[5131]: I0107 09:53:32.457808 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"86f2e886e98414f770131acffb590ca4566dfc410f9760b1c0bbda5529368740"} Jan 07 09:53:32 crc kubenswrapper[5131]: I0107 09:53:32.462032 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"227daa30e68592e2b3bb543ca601c6c64a40af43efc679c7a5792b47128b2e85"} Jan 07 09:53:32 crc kubenswrapper[5131]: I0107 09:53:32.462095 5131 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"cdc8edf173905fcd92a8025d31f3949dc3a9bff65f4b2ddafc57cc4d4516764a"} Jan 07 09:53:32 crc kubenswrapper[5131]: I0107 09:53:32.462207 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 09:53:32 crc kubenswrapper[5131]: I0107 09:53:32.462315 5131 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="37a8b62c-1e16-4bf4-8a1a-7e21eea28a36" Jan 07 09:53:32 crc kubenswrapper[5131]: I0107 09:53:32.462342 5131 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="37a8b62c-1e16-4bf4-8a1a-7e21eea28a36" Jan 07 09:53:33 crc kubenswrapper[5131]: I0107 09:53:33.715489 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 07 09:53:33 crc kubenswrapper[5131]: I0107 09:53:33.715792 5131 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 07 09:53:33 crc kubenswrapper[5131]: I0107 09:53:33.716077 5131 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 07 09:53:34 crc kubenswrapper[5131]: I0107 09:53:34.207617 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 09:53:34 crc kubenswrapper[5131]: I0107 09:53:34.207661 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 09:53:34 crc kubenswrapper[5131]: I0107 09:53:34.215899 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 09:53:34 crc kubenswrapper[5131]: I0107 09:53:34.278486 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 07 09:53:37 crc kubenswrapper[5131]: I0107 09:53:37.820166 5131 kubelet.go:3329] "Deleted mirror pod as it didn't match the static Pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 09:53:37 crc kubenswrapper[5131]: I0107 09:53:37.820807 5131 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 09:53:37 crc kubenswrapper[5131]: I0107 09:53:37.972912 5131 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="0832ac98-7561-4d91-9946-09e3a8715bf4" Jan 07 09:53:38 crc kubenswrapper[5131]: I0107 09:53:38.504460 5131 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="37a8b62c-1e16-4bf4-8a1a-7e21eea28a36" Jan 07 09:53:38 crc kubenswrapper[5131]: I0107 09:53:38.504489 5131 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="37a8b62c-1e16-4bf4-8a1a-7e21eea28a36" Jan 07 09:53:38 crc kubenswrapper[5131]: I0107 09:53:38.507956 5131 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" 
podUID="0832ac98-7561-4d91-9946-09e3a8715bf4" Jan 07 09:53:43 crc kubenswrapper[5131]: I0107 09:53:43.716012 5131 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 07 09:53:43 crc kubenswrapper[5131]: I0107 09:53:43.716101 5131 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 07 09:53:47 crc kubenswrapper[5131]: I0107 09:53:47.906285 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 07 09:53:48 crc kubenswrapper[5131]: I0107 09:53:48.209939 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Jan 07 09:53:48 crc kubenswrapper[5131]: I0107 09:53:48.263055 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Jan 07 09:53:48 crc kubenswrapper[5131]: I0107 09:53:48.861697 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Jan 07 09:53:49 crc kubenswrapper[5131]: I0107 09:53:49.180255 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Jan 07 09:53:49 crc kubenswrapper[5131]: I0107 09:53:49.317765 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-apiserver\"/\"config\"" Jan 07 09:53:49 crc kubenswrapper[5131]: I0107 09:53:49.340748 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Jan 07 09:53:49 crc kubenswrapper[5131]: I0107 09:53:49.357978 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Jan 07 09:53:49 crc kubenswrapper[5131]: I0107 09:53:49.513703 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Jan 07 09:53:49 crc kubenswrapper[5131]: I0107 09:53:49.685671 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Jan 07 09:53:49 crc kubenswrapper[5131]: I0107 09:53:49.928002 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Jan 07 09:53:49 crc kubenswrapper[5131]: I0107 09:53:49.953583 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Jan 07 09:53:50 crc kubenswrapper[5131]: I0107 09:53:50.123089 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Jan 07 09:53:50 crc kubenswrapper[5131]: I0107 09:53:50.184604 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Jan 07 09:53:50 crc kubenswrapper[5131]: I0107 09:53:50.284227 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Jan 07 09:53:50 crc kubenswrapper[5131]: I0107 09:53:50.286458 5131 reflector.go:430] "Caches populated" 
type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Jan 07 09:53:50 crc kubenswrapper[5131]: I0107 09:53:50.397653 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Jan 07 09:53:50 crc kubenswrapper[5131]: I0107 09:53:50.420904 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Jan 07 09:53:50 crc kubenswrapper[5131]: I0107 09:53:50.548190 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Jan 07 09:53:50 crc kubenswrapper[5131]: I0107 09:53:50.663490 5131 patch_prober.go:28] interesting pod/machine-config-daemon-dvdrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 07 09:53:50 crc kubenswrapper[5131]: I0107 09:53:50.663591 5131 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 07 09:53:50 crc kubenswrapper[5131]: I0107 09:53:50.783080 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Jan 07 09:53:50 crc kubenswrapper[5131]: I0107 09:53:50.912708 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Jan 07 09:53:51 crc kubenswrapper[5131]: I0107 09:53:51.086879 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 07 09:53:51 crc kubenswrapper[5131]: I0107 09:53:51.209483 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Jan 07 09:53:51 crc kubenswrapper[5131]: I0107 09:53:51.236209 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Jan 07 09:53:51 crc kubenswrapper[5131]: I0107 09:53:51.270627 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Jan 07 09:53:51 crc kubenswrapper[5131]: I0107 09:53:51.349237 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Jan 07 09:53:51 crc kubenswrapper[5131]: I0107 09:53:51.431868 5131 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Jan 07 09:53:51 crc kubenswrapper[5131]: I0107 09:53:51.440555 5131 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 07 09:53:51 crc kubenswrapper[5131]: I0107 09:53:51.440658 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 07 09:53:51 crc kubenswrapper[5131]: I0107 09:53:51.446478 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 09:53:51 crc kubenswrapper[5131]: I0107 09:53:51.470797 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=14.47077002 podStartE2EDuration="14.47077002s" podCreationTimestamp="2026-01-07 09:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 
09:53:51.466943341 +0000 UTC m=+259.633244915" watchObservedRunningTime="2026-01-07 09:53:51.47077002 +0000 UTC m=+259.637071624" Jan 07 09:53:51 crc kubenswrapper[5131]: I0107 09:53:51.591674 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Jan 07 09:53:51 crc kubenswrapper[5131]: I0107 09:53:51.602035 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 07 09:53:51 crc kubenswrapper[5131]: I0107 09:53:51.642698 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Jan 07 09:53:51 crc kubenswrapper[5131]: I0107 09:53:51.818996 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Jan 07 09:53:51 crc kubenswrapper[5131]: I0107 09:53:51.942483 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Jan 07 09:53:51 crc kubenswrapper[5131]: I0107 09:53:51.950287 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Jan 07 09:53:51 crc kubenswrapper[5131]: I0107 09:53:51.984290 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Jan 07 09:53:51 crc kubenswrapper[5131]: I0107 09:53:51.984537 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Jan 07 09:53:52 crc kubenswrapper[5131]: I0107 09:53:52.002014 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Jan 07 09:53:52 crc 
kubenswrapper[5131]: I0107 09:53:52.023198 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Jan 07 09:53:52 crc kubenswrapper[5131]: I0107 09:53:52.023408 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Jan 07 09:53:52 crc kubenswrapper[5131]: I0107 09:53:52.102064 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Jan 07 09:53:52 crc kubenswrapper[5131]: I0107 09:53:52.137449 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Jan 07 09:53:52 crc kubenswrapper[5131]: I0107 09:53:52.157050 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Jan 07 09:53:52 crc kubenswrapper[5131]: I0107 09:53:52.469305 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Jan 07 09:53:52 crc kubenswrapper[5131]: I0107 09:53:52.473641 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Jan 07 09:53:52 crc kubenswrapper[5131]: I0107 09:53:52.614161 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Jan 07 09:53:52 crc kubenswrapper[5131]: I0107 09:53:52.652659 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Jan 07 09:53:52 crc kubenswrapper[5131]: I0107 09:53:52.736501 5131 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Jan 07 09:53:52 crc kubenswrapper[5131]: I0107 09:53:52.742135 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Jan 07 09:53:52 crc kubenswrapper[5131]: I0107 09:53:52.805348 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Jan 07 09:53:52 crc kubenswrapper[5131]: I0107 09:53:52.818497 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Jan 07 09:53:52 crc kubenswrapper[5131]: I0107 09:53:52.818517 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Jan 07 09:53:52 crc kubenswrapper[5131]: I0107 09:53:52.870056 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Jan 07 09:53:52 crc kubenswrapper[5131]: I0107 09:53:52.911379 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Jan 07 09:53:52 crc kubenswrapper[5131]: I0107 09:53:52.936021 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Jan 07 09:53:53 crc kubenswrapper[5131]: I0107 09:53:53.128258 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Jan 07 09:53:53 crc kubenswrapper[5131]: I0107 09:53:53.147796 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Jan 07 09:53:53 crc kubenswrapper[5131]: I0107 09:53:53.203149 5131 
reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Jan 07 09:53:53 crc kubenswrapper[5131]: I0107 09:53:53.241521 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Jan 07 09:53:53 crc kubenswrapper[5131]: I0107 09:53:53.298503 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Jan 07 09:53:53 crc kubenswrapper[5131]: I0107 09:53:53.331055 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Jan 07 09:53:53 crc kubenswrapper[5131]: I0107 09:53:53.333668 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Jan 07 09:53:53 crc kubenswrapper[5131]: I0107 09:53:53.358982 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Jan 07 09:53:53 crc kubenswrapper[5131]: I0107 09:53:53.395920 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Jan 07 09:53:53 crc kubenswrapper[5131]: I0107 09:53:53.515760 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Jan 07 09:53:53 crc kubenswrapper[5131]: I0107 09:53:53.529720 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Jan 07 09:53:53 crc kubenswrapper[5131]: I0107 09:53:53.600292 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Jan 07 09:53:53 crc kubenswrapper[5131]: 
I0107 09:53:53.624226 5131 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Jan 07 09:53:53 crc kubenswrapper[5131]: I0107 09:53:53.656305 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Jan 07 09:53:53 crc kubenswrapper[5131]: I0107 09:53:53.679458 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Jan 07 09:53:53 crc kubenswrapper[5131]: I0107 09:53:53.694446 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Jan 07 09:53:53 crc kubenswrapper[5131]: I0107 09:53:53.716638 5131 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 07 09:53:53 crc kubenswrapper[5131]: I0107 09:53:53.716739 5131 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 07 09:53:53 crc kubenswrapper[5131]: I0107 09:53:53.716823 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 07 09:53:53 crc kubenswrapper[5131]: I0107 09:53:53.718012 5131 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"86f2e886e98414f770131acffb590ca4566dfc410f9760b1c0bbda5529368740"} 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Jan 07 09:53:53 crc kubenswrapper[5131]: I0107 09:53:53.718227 5131 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" containerID="cri-o://86f2e886e98414f770131acffb590ca4566dfc410f9760b1c0bbda5529368740" gracePeriod=30 Jan 07 09:53:53 crc kubenswrapper[5131]: I0107 09:53:53.733365 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Jan 07 09:53:53 crc kubenswrapper[5131]: I0107 09:53:53.942339 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Jan 07 09:53:53 crc kubenswrapper[5131]: I0107 09:53:53.985548 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Jan 07 09:53:54 crc kubenswrapper[5131]: I0107 09:53:54.013683 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Jan 07 09:53:54 crc kubenswrapper[5131]: I0107 09:53:54.238720 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Jan 07 09:53:54 crc kubenswrapper[5131]: I0107 09:53:54.281564 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 07 09:53:54 crc kubenswrapper[5131]: I0107 09:53:54.284799 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Jan 07 09:53:54 crc kubenswrapper[5131]: 
I0107 09:53:54.286220 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Jan 07 09:53:54 crc kubenswrapper[5131]: I0107 09:53:54.385386 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Jan 07 09:53:54 crc kubenswrapper[5131]: I0107 09:53:54.416154 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Jan 07 09:53:54 crc kubenswrapper[5131]: I0107 09:53:54.419868 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Jan 07 09:53:54 crc kubenswrapper[5131]: I0107 09:53:54.420709 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Jan 07 09:53:54 crc kubenswrapper[5131]: I0107 09:53:54.471500 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Jan 07 09:53:54 crc kubenswrapper[5131]: I0107 09:53:54.487750 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Jan 07 09:53:54 crc kubenswrapper[5131]: I0107 09:53:54.526713 5131 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Jan 07 09:53:54 crc kubenswrapper[5131]: I0107 09:53:54.561318 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Jan 07 09:53:54 crc kubenswrapper[5131]: I0107 09:53:54.599009 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Jan 07 09:53:54 crc kubenswrapper[5131]: I0107 09:53:54.618107 5131 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Jan 07 09:53:54 crc kubenswrapper[5131]: I0107 09:53:54.628625 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Jan 07 09:53:54 crc kubenswrapper[5131]: I0107 09:53:54.676984 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Jan 07 09:53:54 crc kubenswrapper[5131]: I0107 09:53:54.732157 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Jan 07 09:53:54 crc kubenswrapper[5131]: I0107 09:53:54.766281 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Jan 07 09:53:54 crc kubenswrapper[5131]: I0107 09:53:54.828782 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Jan 07 09:53:54 crc kubenswrapper[5131]: I0107 09:53:54.832278 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Jan 07 09:53:54 crc kubenswrapper[5131]: I0107 09:53:54.865181 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Jan 07 09:53:54 crc kubenswrapper[5131]: I0107 09:53:54.885959 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Jan 07 09:53:54 crc kubenswrapper[5131]: I0107 09:53:54.897329 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Jan 07 09:53:54 crc kubenswrapper[5131]: 
I0107 09:53:54.909537 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Jan 07 09:53:54 crc kubenswrapper[5131]: I0107 09:53:54.915228 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Jan 07 09:53:54 crc kubenswrapper[5131]: I0107 09:53:54.958625 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Jan 07 09:53:55 crc kubenswrapper[5131]: I0107 09:53:55.267742 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Jan 07 09:53:55 crc kubenswrapper[5131]: I0107 09:53:55.295650 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Jan 07 09:53:55 crc kubenswrapper[5131]: I0107 09:53:55.299576 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Jan 07 09:53:55 crc kubenswrapper[5131]: I0107 09:53:55.322525 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Jan 07 09:53:55 crc kubenswrapper[5131]: I0107 09:53:55.441532 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Jan 07 09:53:55 crc kubenswrapper[5131]: I0107 09:53:55.592687 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Jan 07 09:53:55 crc kubenswrapper[5131]: I0107 09:53:55.595869 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 07 
09:53:55 crc kubenswrapper[5131]: I0107 09:53:55.646862 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Jan 07 09:53:55 crc kubenswrapper[5131]: I0107 09:53:55.734016 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Jan 07 09:53:55 crc kubenswrapper[5131]: I0107 09:53:55.790806 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Jan 07 09:53:55 crc kubenswrapper[5131]: I0107 09:53:55.798741 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Jan 07 09:53:55 crc kubenswrapper[5131]: I0107 09:53:55.801513 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Jan 07 09:53:55 crc kubenswrapper[5131]: I0107 09:53:55.944746 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Jan 07 09:53:55 crc kubenswrapper[5131]: I0107 09:53:55.980071 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Jan 07 09:53:56 crc kubenswrapper[5131]: I0107 09:53:56.035091 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Jan 07 09:53:56 crc kubenswrapper[5131]: I0107 09:53:56.137043 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Jan 07 09:53:56 crc kubenswrapper[5131]: I0107 09:53:56.171139 5131 reflector.go:430] "Caches populated" 
type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Jan 07 09:53:56 crc kubenswrapper[5131]: I0107 09:53:56.222005 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Jan 07 09:53:56 crc kubenswrapper[5131]: I0107 09:53:56.226158 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Jan 07 09:53:56 crc kubenswrapper[5131]: I0107 09:53:56.246329 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Jan 07 09:53:56 crc kubenswrapper[5131]: I0107 09:53:56.258354 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Jan 07 09:53:56 crc kubenswrapper[5131]: I0107 09:53:56.281265 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Jan 07 09:53:56 crc kubenswrapper[5131]: I0107 09:53:56.382308 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Jan 07 09:53:56 crc kubenswrapper[5131]: I0107 09:53:56.416281 5131 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Jan 07 09:53:56 crc kubenswrapper[5131]: I0107 09:53:56.438073 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Jan 07 09:53:56 crc kubenswrapper[5131]: I0107 09:53:56.566584 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Jan 07 09:53:56 crc kubenswrapper[5131]: I0107 09:53:56.602327 
5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Jan 07 09:53:56 crc kubenswrapper[5131]: I0107 09:53:56.602635 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Jan 07 09:53:56 crc kubenswrapper[5131]: I0107 09:53:56.623703 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Jan 07 09:53:56 crc kubenswrapper[5131]: I0107 09:53:56.645444 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Jan 07 09:53:56 crc kubenswrapper[5131]: I0107 09:53:56.739455 5131 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Jan 07 09:53:56 crc kubenswrapper[5131]: I0107 09:53:56.779430 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Jan 07 09:53:56 crc kubenswrapper[5131]: I0107 09:53:56.999053 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Jan 07 09:53:57 crc kubenswrapper[5131]: I0107 09:53:57.013449 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Jan 07 09:53:57 crc kubenswrapper[5131]: I0107 09:53:57.038432 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Jan 07 09:53:57 crc kubenswrapper[5131]: I0107 09:53:57.042401 5131 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Jan 07 09:53:57 crc kubenswrapper[5131]: I0107 09:53:57.067759 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Jan 07 09:53:57 crc kubenswrapper[5131]: I0107 09:53:57.091742 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Jan 07 09:53:57 crc kubenswrapper[5131]: I0107 09:53:57.420470 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Jan 07 09:53:57 crc kubenswrapper[5131]: I0107 09:53:57.493309 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Jan 07 09:53:57 crc kubenswrapper[5131]: I0107 09:53:57.517663 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Jan 07 09:53:57 crc kubenswrapper[5131]: I0107 09:53:57.644092 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Jan 07 09:53:57 crc kubenswrapper[5131]: I0107 09:53:57.676158 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Jan 07 09:53:57 crc kubenswrapper[5131]: I0107 09:53:57.745344 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Jan 07 09:53:57 crc kubenswrapper[5131]: I0107 09:53:57.779562 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Jan 07 09:53:57 crc kubenswrapper[5131]: I0107 09:53:57.831047 5131 
reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Jan 07 09:53:57 crc kubenswrapper[5131]: I0107 09:53:57.885231 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Jan 07 09:53:57 crc kubenswrapper[5131]: I0107 09:53:57.916447 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Jan 07 09:53:58 crc kubenswrapper[5131]: I0107 09:53:58.028957 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Jan 07 09:53:58 crc kubenswrapper[5131]: I0107 09:53:58.080185 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Jan 07 09:53:58 crc kubenswrapper[5131]: I0107 09:53:58.219165 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Jan 07 09:53:58 crc kubenswrapper[5131]: I0107 09:53:58.228036 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Jan 07 09:53:58 crc kubenswrapper[5131]: I0107 09:53:58.256029 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Jan 07 09:53:58 crc kubenswrapper[5131]: I0107 09:53:58.309243 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Jan 07 09:53:58 crc kubenswrapper[5131]: I0107 09:53:58.318020 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Jan 07 09:53:58 crc 
kubenswrapper[5131]: I0107 09:53:58.348542 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Jan 07 09:53:58 crc kubenswrapper[5131]: I0107 09:53:58.359237 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Jan 07 09:53:58 crc kubenswrapper[5131]: I0107 09:53:58.393350 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Jan 07 09:53:58 crc kubenswrapper[5131]: I0107 09:53:58.412749 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Jan 07 09:53:58 crc kubenswrapper[5131]: I0107 09:53:58.461313 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Jan 07 09:53:58 crc kubenswrapper[5131]: I0107 09:53:58.472370 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Jan 07 09:53:58 crc kubenswrapper[5131]: I0107 09:53:58.519477 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Jan 07 09:53:58 crc kubenswrapper[5131]: I0107 09:53:58.691011 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Jan 07 09:53:58 crc kubenswrapper[5131]: I0107 09:53:58.848716 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Jan 07 09:53:58 crc kubenswrapper[5131]: I0107 09:53:58.999408 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" 
Jan 07 09:53:59 crc kubenswrapper[5131]: I0107 09:53:59.012042 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Jan 07 09:53:59 crc kubenswrapper[5131]: I0107 09:53:59.083137 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Jan 07 09:53:59 crc kubenswrapper[5131]: I0107 09:53:59.121616 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Jan 07 09:53:59 crc kubenswrapper[5131]: I0107 09:53:59.157330 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Jan 07 09:53:59 crc kubenswrapper[5131]: I0107 09:53:59.245183 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Jan 07 09:53:59 crc kubenswrapper[5131]: I0107 09:53:59.289584 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Jan 07 09:53:59 crc kubenswrapper[5131]: I0107 09:53:59.355064 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Jan 07 09:53:59 crc kubenswrapper[5131]: I0107 09:53:59.428858 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Jan 07 09:53:59 crc kubenswrapper[5131]: I0107 09:53:59.495266 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Jan 07 09:53:59 crc kubenswrapper[5131]: I0107 09:53:59.498495 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Jan 07 09:53:59 crc kubenswrapper[5131]: I0107 09:53:59.527298 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Jan 07 09:53:59 crc kubenswrapper[5131]: I0107 09:53:59.582411 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Jan 07 09:53:59 crc kubenswrapper[5131]: I0107 09:53:59.794632 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Jan 07 09:53:59 crc kubenswrapper[5131]: I0107 09:53:59.795783 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Jan 07 09:53:59 crc kubenswrapper[5131]: I0107 09:53:59.826493 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Jan 07 09:53:59 crc kubenswrapper[5131]: I0107 09:53:59.833885 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Jan 07 09:53:59 crc kubenswrapper[5131]: I0107 09:53:59.842393 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Jan 07 09:53:59 crc kubenswrapper[5131]: I0107 09:53:59.855206 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Jan 07 09:54:00 crc kubenswrapper[5131]: I0107 09:54:00.086587 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Jan 07 09:54:00 crc kubenswrapper[5131]: I0107 09:54:00.334962 5131 reflector.go:430] "Caches populated" 
type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Jan 07 09:54:00 crc kubenswrapper[5131]: I0107 09:54:00.365568 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Jan 07 09:54:00 crc kubenswrapper[5131]: I0107 09:54:00.414050 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Jan 07 09:54:00 crc kubenswrapper[5131]: I0107 09:54:00.431603 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Jan 07 09:54:00 crc kubenswrapper[5131]: I0107 09:54:00.451588 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Jan 07 09:54:00 crc kubenswrapper[5131]: I0107 09:54:00.480220 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Jan 07 09:54:00 crc kubenswrapper[5131]: I0107 09:54:00.498755 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Jan 07 09:54:00 crc kubenswrapper[5131]: I0107 09:54:00.519248 5131 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 07 09:54:00 crc kubenswrapper[5131]: I0107 09:54:00.520044 5131 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" containerID="cri-o://8b39e8a0353ed9c99ba1b3d63d7b023d4e6bc93f90148867b373466596497aaf" gracePeriod=5 Jan 07 09:54:00 crc kubenswrapper[5131]: I0107 09:54:00.559967 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Jan 07 09:54:00 crc kubenswrapper[5131]: I0107 09:54:00.560048 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Jan 07 09:54:00 crc kubenswrapper[5131]: I0107 09:54:00.628667 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Jan 07 09:54:00 crc kubenswrapper[5131]: I0107 09:54:00.736269 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Jan 07 09:54:00 crc kubenswrapper[5131]: I0107 09:54:00.743296 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Jan 07 09:54:00 crc kubenswrapper[5131]: I0107 09:54:00.861701 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Jan 07 09:54:00 crc kubenswrapper[5131]: I0107 09:54:00.867126 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Jan 07 09:54:00 crc kubenswrapper[5131]: I0107 09:54:00.945491 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Jan 07 09:54:01 crc kubenswrapper[5131]: I0107 09:54:01.088458 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Jan 07 09:54:01 crc kubenswrapper[5131]: I0107 09:54:01.092298 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Jan 07 09:54:01 crc kubenswrapper[5131]: I0107 
09:54:01.106063 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Jan 07 09:54:01 crc kubenswrapper[5131]: I0107 09:54:01.195577 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Jan 07 09:54:01 crc kubenswrapper[5131]: I0107 09:54:01.205791 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Jan 07 09:54:01 crc kubenswrapper[5131]: I0107 09:54:01.479688 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Jan 07 09:54:01 crc kubenswrapper[5131]: I0107 09:54:01.545241 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Jan 07 09:54:01 crc kubenswrapper[5131]: I0107 09:54:01.603319 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Jan 07 09:54:01 crc kubenswrapper[5131]: I0107 09:54:01.699124 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Jan 07 09:54:01 crc kubenswrapper[5131]: I0107 09:54:01.707587 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Jan 07 09:54:01 crc kubenswrapper[5131]: I0107 09:54:01.725803 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Jan 07 09:54:01 crc kubenswrapper[5131]: I0107 09:54:01.731888 5131 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Jan 07 09:54:01 crc kubenswrapper[5131]: I0107 09:54:01.769957 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Jan 07 09:54:01 crc kubenswrapper[5131]: I0107 09:54:01.865176 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Jan 07 09:54:01 crc kubenswrapper[5131]: I0107 09:54:01.916726 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Jan 07 09:54:01 crc kubenswrapper[5131]: I0107 09:54:01.934309 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Jan 07 09:54:01 crc kubenswrapper[5131]: I0107 09:54:01.990013 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Jan 07 09:54:02 crc kubenswrapper[5131]: I0107 09:54:02.051170 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Jan 07 09:54:02 crc kubenswrapper[5131]: I0107 09:54:02.064000 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Jan 07 09:54:02 crc kubenswrapper[5131]: I0107 09:54:02.124049 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Jan 07 09:54:02 crc kubenswrapper[5131]: I0107 09:54:02.187868 5131 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods 
\"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Jan 07 09:54:02 crc kubenswrapper[5131]: I0107 09:54:02.200773 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Jan 07 09:54:02 crc kubenswrapper[5131]: I0107 09:54:02.272048 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Jan 07 09:54:02 crc kubenswrapper[5131]: I0107 09:54:02.289212 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Jan 07 09:54:02 crc kubenswrapper[5131]: I0107 09:54:02.373638 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Jan 07 09:54:02 crc kubenswrapper[5131]: I0107 09:54:02.424440 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Jan 07 09:54:02 crc kubenswrapper[5131]: I0107 09:54:02.438715 5131 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Jan 07 09:54:02 crc kubenswrapper[5131]: I0107 09:54:02.465751 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Jan 07 09:54:02 crc kubenswrapper[5131]: I0107 09:54:02.621177 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Jan 07 09:54:02 crc kubenswrapper[5131]: I0107 09:54:02.671423 5131 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Jan 07 09:54:02 crc kubenswrapper[5131]: I0107 09:54:02.794234 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Jan 07 09:54:02 crc kubenswrapper[5131]: I0107 09:54:02.807775 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Jan 07 09:54:02 crc kubenswrapper[5131]: I0107 09:54:02.823766 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Jan 07 09:54:02 crc kubenswrapper[5131]: I0107 09:54:02.986663 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Jan 07 09:54:03 crc kubenswrapper[5131]: I0107 09:54:03.026390 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Jan 07 09:54:03 crc kubenswrapper[5131]: I0107 09:54:03.205175 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Jan 07 09:54:03 crc kubenswrapper[5131]: I0107 09:54:03.260901 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Jan 07 09:54:03 crc kubenswrapper[5131]: I0107 09:54:03.337953 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Jan 07 09:54:03 crc kubenswrapper[5131]: I0107 09:54:03.482085 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Jan 07 09:54:03 crc kubenswrapper[5131]: I0107 09:54:03.499416 5131 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\""
Jan 07 09:54:03 crc kubenswrapper[5131]: I0107 09:54:03.523382 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\""
Jan 07 09:54:03 crc kubenswrapper[5131]: I0107 09:54:03.594292 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\""
Jan 07 09:54:03 crc kubenswrapper[5131]: I0107 09:54:03.622063 5131 ???:1] "http: TLS handshake error from 192.168.126.11:35170: no serving certificate available for the kubelet"
Jan 07 09:54:03 crc kubenswrapper[5131]: I0107 09:54:03.661727 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\""
Jan 07 09:54:03 crc kubenswrapper[5131]: I0107 09:54:03.838541 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\""
Jan 07 09:54:04 crc kubenswrapper[5131]: I0107 09:54:04.103714 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\""
Jan 07 09:54:04 crc kubenswrapper[5131]: I0107 09:54:04.192354 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\""
Jan 07 09:54:04 crc kubenswrapper[5131]: I0107 09:54:04.206635 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\""
Jan 07 09:54:04 crc kubenswrapper[5131]: I0107 09:54:04.392991 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\""
Jan 07 09:54:04 crc kubenswrapper[5131]: I0107 09:54:04.478553 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\""
Jan 07 09:54:04 crc kubenswrapper[5131]: I0107 09:54:04.495861 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\""
Jan 07 09:54:04 crc kubenswrapper[5131]: I0107 09:54:04.563722 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\""
Jan 07 09:54:04 crc kubenswrapper[5131]: I0107 09:54:04.730125 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\""
Jan 07 09:54:04 crc kubenswrapper[5131]: I0107 09:54:04.812506 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\""
Jan 07 09:54:04 crc kubenswrapper[5131]: I0107 09:54:04.818459 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\""
Jan 07 09:54:05 crc kubenswrapper[5131]: I0107 09:54:05.134172 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\""
Jan 07 09:54:05 crc kubenswrapper[5131]: I0107 09:54:05.346862 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\""
Jan 07 09:54:05 crc kubenswrapper[5131]: I0107 09:54:05.697479 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log"
Jan 07 09:54:05 crc kubenswrapper[5131]: I0107 09:54:05.698089 5131 generic.go:358] "Generic (PLEG): container finished" podID="f7dbc7e1ee9c187a863ef9b473fad27b" containerID="8b39e8a0353ed9c99ba1b3d63d7b023d4e6bc93f90148867b373466596497aaf" exitCode=137
Jan 07 09:54:05 crc kubenswrapper[5131]: I0107 09:54:05.777549 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\""
Jan 07 09:54:06 crc kubenswrapper[5131]: I0107 09:54:06.134971 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log"
Jan 07 09:54:06 crc kubenswrapper[5131]: I0107 09:54:06.135072 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 07 09:54:06 crc kubenswrapper[5131]: I0107 09:54:06.137067 5131 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object"
Jan 07 09:54:06 crc kubenswrapper[5131]: I0107 09:54:06.249966 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Jan 07 09:54:06 crc kubenswrapper[5131]: I0107 09:54:06.250046 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Jan 07 09:54:06 crc kubenswrapper[5131]: I0107 09:54:06.250101 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Jan 07 09:54:06 crc kubenswrapper[5131]: I0107 09:54:06.250298 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 07 09:54:06 crc kubenswrapper[5131]: I0107 09:54:06.250363 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests" (OuterVolumeSpecName: "manifests") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 07 09:54:06 crc kubenswrapper[5131]: I0107 09:54:06.250316 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock" (OuterVolumeSpecName: "var-lock") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 07 09:54:06 crc kubenswrapper[5131]: I0107 09:54:06.250439 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Jan 07 09:54:06 crc kubenswrapper[5131]: I0107 09:54:06.250587 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Jan 07 09:54:06 crc kubenswrapper[5131]: I0107 09:54:06.250757 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log" (OuterVolumeSpecName: "var-log") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 07 09:54:06 crc kubenswrapper[5131]: I0107 09:54:06.252296 5131 reconciler_common.go:299] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") on node \"crc\" DevicePath \"\""
Jan 07 09:54:06 crc kubenswrapper[5131]: I0107 09:54:06.252322 5131 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 07 09:54:06 crc kubenswrapper[5131]: I0107 09:54:06.252334 5131 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") on node \"crc\" DevicePath \"\""
Jan 07 09:54:06 crc kubenswrapper[5131]: I0107 09:54:06.252345 5131 reconciler_common.go:299] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") on node \"crc\" DevicePath \"\""
Jan 07 09:54:06 crc kubenswrapper[5131]: I0107 09:54:06.263607 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 07 09:54:06 crc kubenswrapper[5131]: I0107 09:54:06.353738 5131 reconciler_common.go:299] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 07 09:54:06 crc kubenswrapper[5131]: I0107 09:54:06.708766 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log"
Jan 07 09:54:06 crc kubenswrapper[5131]: I0107 09:54:06.709042 5131 scope.go:117] "RemoveContainer" containerID="8b39e8a0353ed9c99ba1b3d63d7b023d4e6bc93f90148867b373466596497aaf"
Jan 07 09:54:06 crc kubenswrapper[5131]: I0107 09:54:06.709122 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 07 09:54:06 crc kubenswrapper[5131]: I0107 09:54:06.712117 5131 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object"
Jan 07 09:54:06 crc kubenswrapper[5131]: I0107 09:54:06.741936 5131 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object"
Jan 07 09:54:07 crc kubenswrapper[5131]: I0107 09:54:07.592314 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\""
Jan 07 09:54:08 crc kubenswrapper[5131]: I0107 09:54:08.191525 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" path="/var/lib/kubelet/pods/f7dbc7e1ee9c187a863ef9b473fad27b/volumes"
Jan 07 09:54:20 crc kubenswrapper[5131]: I0107 09:54:20.663342 5131 patch_prober.go:28] interesting pod/machine-config-daemon-dvdrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 07 09:54:20 crc kubenswrapper[5131]: I0107 09:54:20.664097 5131 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 07 09:54:20 crc kubenswrapper[5131]: I0107 09:54:20.664184 5131 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn"
Jan 07 09:54:20 crc kubenswrapper[5131]: I0107 09:54:20.665224 5131 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"903008c51d00a0d816920831c3581e75cc8a3222da74d38c39c99f7e621c1add"} pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 07 09:54:20 crc kubenswrapper[5131]: I0107 09:54:20.665363 5131 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73" containerName="machine-config-daemon" containerID="cri-o://903008c51d00a0d816920831c3581e75cc8a3222da74d38c39c99f7e621c1add" gracePeriod=600
Jan 07 09:54:20 crc kubenswrapper[5131]: I0107 09:54:20.824200 5131 generic.go:358] "Generic (PLEG): container finished" podID="3942e752-44ba-4678-8723-6cd778e60d73" containerID="903008c51d00a0d816920831c3581e75cc8a3222da74d38c39c99f7e621c1add" exitCode=0
Jan 07 09:54:20 crc kubenswrapper[5131]: I0107 09:54:20.824288 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" event={"ID":"3942e752-44ba-4678-8723-6cd778e60d73","Type":"ContainerDied","Data":"903008c51d00a0d816920831c3581e75cc8a3222da74d38c39c99f7e621c1add"}
Jan 07 09:54:21 crc kubenswrapper[5131]: I0107 09:54:21.835295 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" event={"ID":"3942e752-44ba-4678-8723-6cd778e60d73","Type":"ContainerStarted","Data":"e79b67bc8389c68c2ac09cb38bf889a9519e79a63ac71b01c26e01c34973b2a7"}
Jan 07 09:54:22 crc kubenswrapper[5131]: I0107 09:54:22.843651 5131 generic.go:358] "Generic (PLEG): container finished" podID="1697c475-b030-40da-9ed0-7884931c55fd" containerID="fbcc0b4d92087a423b14dffdae57ff1f54fa5b1109f42a876ac1080d378c4598" exitCode=0
Jan 07 09:54:22 crc kubenswrapper[5131]: I0107 09:54:22.843779 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-mrfk7" event={"ID":"1697c475-b030-40da-9ed0-7884931c55fd","Type":"ContainerDied","Data":"fbcc0b4d92087a423b14dffdae57ff1f54fa5b1109f42a876ac1080d378c4598"}
Jan 07 09:54:22 crc kubenswrapper[5131]: I0107 09:54:22.845223 5131 scope.go:117] "RemoveContainer" containerID="fbcc0b4d92087a423b14dffdae57ff1f54fa5b1109f42a876ac1080d378c4598"
Jan 07 09:54:23 crc kubenswrapper[5131]: I0107 09:54:23.166661 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-mrfk7"
Jan 07 09:54:23 crc kubenswrapper[5131]: I0107 09:54:23.852906 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log"
Jan 07 09:54:23 crc kubenswrapper[5131]: I0107 09:54:23.855363 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Jan 07 09:54:23 crc kubenswrapper[5131]: I0107 09:54:23.855438 5131 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="86f2e886e98414f770131acffb590ca4566dfc410f9760b1c0bbda5529368740" exitCode=137
Jan 07 09:54:23 crc kubenswrapper[5131]: I0107 09:54:23.855589 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"86f2e886e98414f770131acffb590ca4566dfc410f9760b1c0bbda5529368740"}
Jan 07 09:54:23 crc kubenswrapper[5131]: I0107 09:54:23.855690 5131 scope.go:117] "RemoveContainer" containerID="81a19faef229379a9f11c9404f00a4cd033fe495e075b60878147f896005767f"
Jan 07 09:54:23 crc kubenswrapper[5131]: I0107 09:54:23.865084 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-mrfk7" event={"ID":"1697c475-b030-40da-9ed0-7884931c55fd","Type":"ContainerStarted","Data":"5c761bf1ec3d205aecf5fb8038cc4f468a7910c6de1826cdd084deade4fc5e4a"}
Jan 07 09:54:23 crc kubenswrapper[5131]: I0107 09:54:23.865606 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-mrfk7"
Jan 07 09:54:23 crc kubenswrapper[5131]: I0107 09:54:23.869305 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-mrfk7"
Jan 07 09:54:24 crc kubenswrapper[5131]: I0107 09:54:24.872112 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log"
Jan 07 09:54:24 crc kubenswrapper[5131]: I0107 09:54:24.874457 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"d7dda8d0b8376d5557c346913b273479a8d3db8c25eb097fdfa837b2e5a0072d"}
Jan 07 09:54:32 crc kubenswrapper[5131]: I0107 09:54:32.367974 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log"
Jan 07 09:54:32 crc kubenswrapper[5131]: I0107 09:54:32.371028 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log"
Jan 07 09:54:32 crc kubenswrapper[5131]: I0107 09:54:32.969980 5131 ???:1] "http: TLS handshake error from 192.168.126.11:46496: no serving certificate available for the kubelet"
Jan 07 09:54:33 crc kubenswrapper[5131]: I0107 09:54:33.716165 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 07 09:54:33 crc kubenswrapper[5131]: I0107 09:54:33.724205 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 07 09:54:33 crc kubenswrapper[5131]: I0107 09:54:33.934559 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 07 09:54:34 crc kubenswrapper[5131]: I0107 09:54:34.946320 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 07 09:54:41 crc kubenswrapper[5131]: I0107 09:54:41.413028 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-nhgrp"]
Jan 07 09:54:41 crc kubenswrapper[5131]: I0107 09:54:41.413853 5131 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-nhgrp" podUID="6497dc94-29dd-4d24-8a87-6721b752e8d3" containerName="route-controller-manager" containerID="cri-o://ffe98d04ef4fa878834fd4bb1f5f8699886e95a8302947b778ace88b87235c70" gracePeriod=30
Jan 07 09:54:41 crc kubenswrapper[5131]: I0107 09:54:41.417575 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-pssml"]
Jan 07 09:54:41 crc kubenswrapper[5131]: I0107 09:54:41.417959 5131 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65b6cccf98-pssml" podUID="4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4" containerName="controller-manager" containerID="cri-o://cd484f176832a1aef66355bbd41c7daa9a28b2978cd79d337835f9fea32a1699" gracePeriod=30
Jan 07 09:54:41 crc kubenswrapper[5131]: I0107 09:54:41.844772 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-nhgrp"
Jan 07 09:54:41 crc kubenswrapper[5131]: I0107 09:54:41.848618 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-pssml"
Jan 07 09:54:41 crc kubenswrapper[5131]: I0107 09:54:41.869993 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-v5wjg"]
Jan 07 09:54:41 crc kubenswrapper[5131]: I0107 09:54:41.870806 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6497dc94-29dd-4d24-8a87-6721b752e8d3" containerName="route-controller-manager"
Jan 07 09:54:41 crc kubenswrapper[5131]: I0107 09:54:41.870929 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="6497dc94-29dd-4d24-8a87-6721b752e8d3" containerName="route-controller-manager"
Jan 07 09:54:41 crc kubenswrapper[5131]: I0107 09:54:41.871016 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4" containerName="controller-manager"
Jan 07 09:54:41 crc kubenswrapper[5131]: I0107 09:54:41.871082 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4" containerName="controller-manager"
Jan 07 09:54:41 crc kubenswrapper[5131]: I0107 09:54:41.871161 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor"
Jan 07 09:54:41 crc kubenswrapper[5131]: I0107 09:54:41.871223 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor"
Jan 07 09:54:41 crc kubenswrapper[5131]: I0107 09:54:41.871294 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2af676b7-b75c-4dae-98d9-9caa20f87c9b" containerName="installer"
Jan 07 09:54:41 crc kubenswrapper[5131]: I0107 09:54:41.871358 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="2af676b7-b75c-4dae-98d9-9caa20f87c9b" containerName="installer"
Jan 07 09:54:41 crc kubenswrapper[5131]: I0107 09:54:41.871523 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4" containerName="controller-manager"
Jan 07 09:54:41 crc kubenswrapper[5131]: I0107 09:54:41.871606 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="2af676b7-b75c-4dae-98d9-9caa20f87c9b" containerName="installer"
Jan 07 09:54:41 crc kubenswrapper[5131]: I0107 09:54:41.871666 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor"
Jan 07 09:54:41 crc kubenswrapper[5131]: I0107 09:54:41.871720 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="6497dc94-29dd-4d24-8a87-6721b752e8d3" containerName="route-controller-manager"
Jan 07 09:54:41 crc kubenswrapper[5131]: I0107 09:54:41.878620 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-v5wjg"
Jan 07 09:54:41 crc kubenswrapper[5131]: I0107 09:54:41.890452 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-v5wjg"]
Jan 07 09:54:41 crc kubenswrapper[5131]: I0107 09:54:41.903886 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1adb0cf4-b0b3-4daa-b488-76c1fc919c29-client-ca\") pod \"route-controller-manager-68bf8c9dc6-v5wjg\" (UID: \"1adb0cf4-b0b3-4daa-b488-76c1fc919c29\") " pod="openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-v5wjg"
Jan 07 09:54:41 crc kubenswrapper[5131]: I0107 09:54:41.903931 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4xd8\" (UniqueName: \"kubernetes.io/projected/1adb0cf4-b0b3-4daa-b488-76c1fc919c29-kube-api-access-z4xd8\") pod \"route-controller-manager-68bf8c9dc6-v5wjg\" (UID: \"1adb0cf4-b0b3-4daa-b488-76c1fc919c29\") " pod="openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-v5wjg"
Jan 07 09:54:41 crc kubenswrapper[5131]: I0107 09:54:41.903965 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1adb0cf4-b0b3-4daa-b488-76c1fc919c29-serving-cert\") pod \"route-controller-manager-68bf8c9dc6-v5wjg\" (UID: \"1adb0cf4-b0b3-4daa-b488-76c1fc919c29\") " pod="openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-v5wjg"
Jan 07 09:54:41 crc kubenswrapper[5131]: I0107 09:54:41.904006 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1adb0cf4-b0b3-4daa-b488-76c1fc919c29-config\") pod \"route-controller-manager-68bf8c9dc6-v5wjg\" (UID: \"1adb0cf4-b0b3-4daa-b488-76c1fc919c29\") " pod="openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-v5wjg"
Jan 07 09:54:41 crc kubenswrapper[5131]: I0107 09:54:41.904047 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1adb0cf4-b0b3-4daa-b488-76c1fc919c29-tmp\") pod \"route-controller-manager-68bf8c9dc6-v5wjg\" (UID: \"1adb0cf4-b0b3-4daa-b488-76c1fc919c29\") " pod="openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-v5wjg"
Jan 07 09:54:41 crc kubenswrapper[5131]: I0107 09:54:41.914970 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-66bd7f6cf-8hvbf"]
Jan 07 09:54:41 crc kubenswrapper[5131]: I0107 09:54:41.919811 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-66bd7f6cf-8hvbf"
Jan 07 09:54:41 crc kubenswrapper[5131]: I0107 09:54:41.920290 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-66bd7f6cf-8hvbf"]
Jan 07 09:54:41 crc kubenswrapper[5131]: I0107 09:54:41.992093 5131 generic.go:358] "Generic (PLEG): container finished" podID="6497dc94-29dd-4d24-8a87-6721b752e8d3" containerID="ffe98d04ef4fa878834fd4bb1f5f8699886e95a8302947b778ace88b87235c70" exitCode=0
Jan 07 09:54:41 crc kubenswrapper[5131]: I0107 09:54:41.992444 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-nhgrp" event={"ID":"6497dc94-29dd-4d24-8a87-6721b752e8d3","Type":"ContainerDied","Data":"ffe98d04ef4fa878834fd4bb1f5f8699886e95a8302947b778ace88b87235c70"}
Jan 07 09:54:41 crc kubenswrapper[5131]: I0107 09:54:41.992477 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-nhgrp" event={"ID":"6497dc94-29dd-4d24-8a87-6721b752e8d3","Type":"ContainerDied","Data":"86300854854aae4982657befbafbb976d01490cb0157920835be7edfe0b908c1"}
Jan 07 09:54:41 crc kubenswrapper[5131]: I0107 09:54:41.992499 5131 scope.go:117] "RemoveContainer" containerID="ffe98d04ef4fa878834fd4bb1f5f8699886e95a8302947b778ace88b87235c70"
Jan 07 09:54:41 crc kubenswrapper[5131]: I0107 09:54:41.992888 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-nhgrp"
Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.003149 5131 generic.go:358] "Generic (PLEG): container finished" podID="4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4" containerID="cd484f176832a1aef66355bbd41c7daa9a28b2978cd79d337835f9fea32a1699" exitCode=0
Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.003233 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-pssml" event={"ID":"4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4","Type":"ContainerDied","Data":"cd484f176832a1aef66355bbd41c7daa9a28b2978cd79d337835f9fea32a1699"}
Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.003275 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-pssml" event={"ID":"4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4","Type":"ContainerDied","Data":"d9cd023397ab571a5d5a47de7628c6766a688163bf51e0372674ce9a937fc5a2"}
Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.003343 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-pssml"
Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.006587 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4-config\") pod \"4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4\" (UID: \"4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4\") "
Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.006656 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sv8dz\" (UniqueName: \"kubernetes.io/projected/4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4-kube-api-access-sv8dz\") pod \"4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4\" (UID: \"4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4\") "
Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.007878 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6497dc94-29dd-4d24-8a87-6721b752e8d3-serving-cert\") pod \"6497dc94-29dd-4d24-8a87-6721b752e8d3\" (UID: \"6497dc94-29dd-4d24-8a87-6721b752e8d3\") "
Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.007977 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6497dc94-29dd-4d24-8a87-6721b752e8d3-config\") pod \"6497dc94-29dd-4d24-8a87-6721b752e8d3\" (UID: \"6497dc94-29dd-4d24-8a87-6721b752e8d3\") "
Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.008094 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6497dc94-29dd-4d24-8a87-6721b752e8d3-tmp\") pod \"6497dc94-29dd-4d24-8a87-6721b752e8d3\" (UID: \"6497dc94-29dd-4d24-8a87-6721b752e8d3\") "
Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.008151 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4-config" (OuterVolumeSpecName: "config") pod "4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4" (UID: "4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.008603 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6497dc94-29dd-4d24-8a87-6721b752e8d3-client-ca\") pod \"6497dc94-29dd-4d24-8a87-6721b752e8d3\" (UID: \"6497dc94-29dd-4d24-8a87-6721b752e8d3\") "
Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.008782 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6497dc94-29dd-4d24-8a87-6721b752e8d3-tmp" (OuterVolumeSpecName: "tmp") pod "6497dc94-29dd-4d24-8a87-6721b752e8d3" (UID: "6497dc94-29dd-4d24-8a87-6721b752e8d3"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.009143 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4-serving-cert\") pod \"4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4\" (UID: \"4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4\") "
Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.009310 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gj97\" (UniqueName: \"kubernetes.io/projected/6497dc94-29dd-4d24-8a87-6721b752e8d3-kube-api-access-5gj97\") pod \"6497dc94-29dd-4d24-8a87-6721b752e8d3\" (UID: \"6497dc94-29dd-4d24-8a87-6721b752e8d3\") "
Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.009378 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4-proxy-ca-bundles\") pod \"4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4\" (UID: \"4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4\") "
Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.009457 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4-tmp\") pod \"4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4\" (UID: \"4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4\") "
Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.009523 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6497dc94-29dd-4d24-8a87-6721b752e8d3-client-ca" (OuterVolumeSpecName: "client-ca") pod "6497dc94-29dd-4d24-8a87-6721b752e8d3" (UID: "6497dc94-29dd-4d24-8a87-6721b752e8d3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.009547 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6497dc94-29dd-4d24-8a87-6721b752e8d3-config" (OuterVolumeSpecName: "config") pod "6497dc94-29dd-4d24-8a87-6721b752e8d3" (UID: "6497dc94-29dd-4d24-8a87-6721b752e8d3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.009625 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4-client-ca\") pod \"4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4\" (UID: \"4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4\") "
Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.009854 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1adb0cf4-b0b3-4daa-b488-76c1fc919c29-client-ca\") pod \"route-controller-manager-68bf8c9dc6-v5wjg\" (UID: \"1adb0cf4-b0b3-4daa-b488-76c1fc919c29\") " pod="openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-v5wjg"
Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.013089 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1adb0cf4-b0b3-4daa-b488-76c1fc919c29-client-ca\") pod \"route-controller-manager-68bf8c9dc6-v5wjg\" (UID: \"1adb0cf4-b0b3-4daa-b488-76c1fc919c29\") " pod="openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-v5wjg"
Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.010886 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4-tmp" (OuterVolumeSpecName: "tmp") pod "4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4" (UID: "4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4"). InnerVolumeSpecName "tmp".
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.010925 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4" (UID: "4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.012852 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4-client-ca" (OuterVolumeSpecName: "client-ca") pod "4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4" (UID: "4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.013718 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z4xd8\" (UniqueName: \"kubernetes.io/projected/1adb0cf4-b0b3-4daa-b488-76c1fc919c29-kube-api-access-z4xd8\") pod \"route-controller-manager-68bf8c9dc6-v5wjg\" (UID: \"1adb0cf4-b0b3-4daa-b488-76c1fc919c29\") " pod="openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-v5wjg" Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.013813 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1adb0cf4-b0b3-4daa-b488-76c1fc919c29-serving-cert\") pod \"route-controller-manager-68bf8c9dc6-v5wjg\" (UID: \"1adb0cf4-b0b3-4daa-b488-76c1fc919c29\") " pod="openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-v5wjg" Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.014595 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1adb0cf4-b0b3-4daa-b488-76c1fc919c29-config\") pod \"route-controller-manager-68bf8c9dc6-v5wjg\" (UID: \"1adb0cf4-b0b3-4daa-b488-76c1fc919c29\") " pod="openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-v5wjg" Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.014680 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1adb0cf4-b0b3-4daa-b488-76c1fc919c29-tmp\") pod \"route-controller-manager-68bf8c9dc6-v5wjg\" (UID: \"1adb0cf4-b0b3-4daa-b488-76c1fc919c29\") " pod="openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-v5wjg" Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.014953 5131 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.014967 5131 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4-tmp\") on node \"crc\" DevicePath \"\"" Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.014977 5131 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4-client-ca\") on node \"crc\" DevicePath \"\"" Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.014989 5131 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4-config\") on node \"crc\" DevicePath \"\"" Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.014998 5131 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6497dc94-29dd-4d24-8a87-6721b752e8d3-config\") on node \"crc\" DevicePath \"\"" Jan 07 09:54:42 crc kubenswrapper[5131]: 
I0107 09:54:42.015008 5131 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6497dc94-29dd-4d24-8a87-6721b752e8d3-tmp\") on node \"crc\" DevicePath \"\"" Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.015017 5131 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6497dc94-29dd-4d24-8a87-6721b752e8d3-client-ca\") on node \"crc\" DevicePath \"\"" Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.015389 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1adb0cf4-b0b3-4daa-b488-76c1fc919c29-tmp\") pod \"route-controller-manager-68bf8c9dc6-v5wjg\" (UID: \"1adb0cf4-b0b3-4daa-b488-76c1fc919c29\") " pod="openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-v5wjg" Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.022454 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1adb0cf4-b0b3-4daa-b488-76c1fc919c29-config\") pod \"route-controller-manager-68bf8c9dc6-v5wjg\" (UID: \"1adb0cf4-b0b3-4daa-b488-76c1fc919c29\") " pod="openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-v5wjg" Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.022695 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6497dc94-29dd-4d24-8a87-6721b752e8d3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6497dc94-29dd-4d24-8a87-6721b752e8d3" (UID: "6497dc94-29dd-4d24-8a87-6721b752e8d3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.022954 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4" (UID: "4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.023312 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1adb0cf4-b0b3-4daa-b488-76c1fc919c29-serving-cert\") pod \"route-controller-manager-68bf8c9dc6-v5wjg\" (UID: \"1adb0cf4-b0b3-4daa-b488-76c1fc919c29\") " pod="openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-v5wjg" Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.024213 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6497dc94-29dd-4d24-8a87-6721b752e8d3-kube-api-access-5gj97" (OuterVolumeSpecName: "kube-api-access-5gj97") pod "6497dc94-29dd-4d24-8a87-6721b752e8d3" (UID: "6497dc94-29dd-4d24-8a87-6721b752e8d3"). InnerVolumeSpecName "kube-api-access-5gj97". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.027032 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4-kube-api-access-sv8dz" (OuterVolumeSpecName: "kube-api-access-sv8dz") pod "4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4" (UID: "4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4"). InnerVolumeSpecName "kube-api-access-sv8dz". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.029032 5131 scope.go:117] "RemoveContainer" containerID="ffe98d04ef4fa878834fd4bb1f5f8699886e95a8302947b778ace88b87235c70" Jan 07 09:54:42 crc kubenswrapper[5131]: E0107 09:54:42.029566 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffe98d04ef4fa878834fd4bb1f5f8699886e95a8302947b778ace88b87235c70\": container with ID starting with ffe98d04ef4fa878834fd4bb1f5f8699886e95a8302947b778ace88b87235c70 not found: ID does not exist" containerID="ffe98d04ef4fa878834fd4bb1f5f8699886e95a8302947b778ace88b87235c70" Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.029624 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffe98d04ef4fa878834fd4bb1f5f8699886e95a8302947b778ace88b87235c70"} err="failed to get container status \"ffe98d04ef4fa878834fd4bb1f5f8699886e95a8302947b778ace88b87235c70\": rpc error: code = NotFound desc = could not find container \"ffe98d04ef4fa878834fd4bb1f5f8699886e95a8302947b778ace88b87235c70\": container with ID starting with ffe98d04ef4fa878834fd4bb1f5f8699886e95a8302947b778ace88b87235c70 not found: ID does not exist" Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.029668 5131 scope.go:117] "RemoveContainer" containerID="cd484f176832a1aef66355bbd41c7daa9a28b2978cd79d337835f9fea32a1699" Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.040347 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4xd8\" (UniqueName: \"kubernetes.io/projected/1adb0cf4-b0b3-4daa-b488-76c1fc919c29-kube-api-access-z4xd8\") pod \"route-controller-manager-68bf8c9dc6-v5wjg\" (UID: \"1adb0cf4-b0b3-4daa-b488-76c1fc919c29\") " pod="openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-v5wjg" Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.045676 5131 scope.go:117] 
"RemoveContainer" containerID="cd484f176832a1aef66355bbd41c7daa9a28b2978cd79d337835f9fea32a1699" Jan 07 09:54:42 crc kubenswrapper[5131]: E0107 09:54:42.046193 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd484f176832a1aef66355bbd41c7daa9a28b2978cd79d337835f9fea32a1699\": container with ID starting with cd484f176832a1aef66355bbd41c7daa9a28b2978cd79d337835f9fea32a1699 not found: ID does not exist" containerID="cd484f176832a1aef66355bbd41c7daa9a28b2978cd79d337835f9fea32a1699" Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.046272 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd484f176832a1aef66355bbd41c7daa9a28b2978cd79d337835f9fea32a1699"} err="failed to get container status \"cd484f176832a1aef66355bbd41c7daa9a28b2978cd79d337835f9fea32a1699\": rpc error: code = NotFound desc = could not find container \"cd484f176832a1aef66355bbd41c7daa9a28b2978cd79d337835f9fea32a1699\": container with ID starting with cd484f176832a1aef66355bbd41c7daa9a28b2978cd79d337835f9fea32a1699 not found: ID does not exist" Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.115670 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcvrn\" (UniqueName: \"kubernetes.io/projected/bdaedf2e-f212-4019-bb34-e5e80537f8d3-kube-api-access-lcvrn\") pod \"controller-manager-66bd7f6cf-8hvbf\" (UID: \"bdaedf2e-f212-4019-bb34-e5e80537f8d3\") " pod="openshift-controller-manager/controller-manager-66bd7f6cf-8hvbf" Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.115735 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bdaedf2e-f212-4019-bb34-e5e80537f8d3-tmp\") pod \"controller-manager-66bd7f6cf-8hvbf\" (UID: \"bdaedf2e-f212-4019-bb34-e5e80537f8d3\") " 
pod="openshift-controller-manager/controller-manager-66bd7f6cf-8hvbf" Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.115805 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdaedf2e-f212-4019-bb34-e5e80537f8d3-config\") pod \"controller-manager-66bd7f6cf-8hvbf\" (UID: \"bdaedf2e-f212-4019-bb34-e5e80537f8d3\") " pod="openshift-controller-manager/controller-manager-66bd7f6cf-8hvbf" Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.116010 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bdaedf2e-f212-4019-bb34-e5e80537f8d3-proxy-ca-bundles\") pod \"controller-manager-66bd7f6cf-8hvbf\" (UID: \"bdaedf2e-f212-4019-bb34-e5e80537f8d3\") " pod="openshift-controller-manager/controller-manager-66bd7f6cf-8hvbf" Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.116147 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bdaedf2e-f212-4019-bb34-e5e80537f8d3-client-ca\") pod \"controller-manager-66bd7f6cf-8hvbf\" (UID: \"bdaedf2e-f212-4019-bb34-e5e80537f8d3\") " pod="openshift-controller-manager/controller-manager-66bd7f6cf-8hvbf" Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.116175 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bdaedf2e-f212-4019-bb34-e5e80537f8d3-serving-cert\") pod \"controller-manager-66bd7f6cf-8hvbf\" (UID: \"bdaedf2e-f212-4019-bb34-e5e80537f8d3\") " pod="openshift-controller-manager/controller-manager-66bd7f6cf-8hvbf" Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.116965 5131 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.116989 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5gj97\" (UniqueName: \"kubernetes.io/projected/6497dc94-29dd-4d24-8a87-6721b752e8d3-kube-api-access-5gj97\") on node \"crc\" DevicePath \"\"" Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.117002 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sv8dz\" (UniqueName: \"kubernetes.io/projected/4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4-kube-api-access-sv8dz\") on node \"crc\" DevicePath \"\"" Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.117036 5131 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6497dc94-29dd-4d24-8a87-6721b752e8d3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.210357 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-v5wjg" Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.217993 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdaedf2e-f212-4019-bb34-e5e80537f8d3-config\") pod \"controller-manager-66bd7f6cf-8hvbf\" (UID: \"bdaedf2e-f212-4019-bb34-e5e80537f8d3\") " pod="openshift-controller-manager/controller-manager-66bd7f6cf-8hvbf" Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.218040 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bdaedf2e-f212-4019-bb34-e5e80537f8d3-proxy-ca-bundles\") pod \"controller-manager-66bd7f6cf-8hvbf\" (UID: \"bdaedf2e-f212-4019-bb34-e5e80537f8d3\") " pod="openshift-controller-manager/controller-manager-66bd7f6cf-8hvbf" Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.218075 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bdaedf2e-f212-4019-bb34-e5e80537f8d3-client-ca\") pod \"controller-manager-66bd7f6cf-8hvbf\" (UID: \"bdaedf2e-f212-4019-bb34-e5e80537f8d3\") " pod="openshift-controller-manager/controller-manager-66bd7f6cf-8hvbf" Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.218093 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bdaedf2e-f212-4019-bb34-e5e80537f8d3-serving-cert\") pod \"controller-manager-66bd7f6cf-8hvbf\" (UID: \"bdaedf2e-f212-4019-bb34-e5e80537f8d3\") " pod="openshift-controller-manager/controller-manager-66bd7f6cf-8hvbf" Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.218151 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lcvrn\" (UniqueName: 
\"kubernetes.io/projected/bdaedf2e-f212-4019-bb34-e5e80537f8d3-kube-api-access-lcvrn\") pod \"controller-manager-66bd7f6cf-8hvbf\" (UID: \"bdaedf2e-f212-4019-bb34-e5e80537f8d3\") " pod="openshift-controller-manager/controller-manager-66bd7f6cf-8hvbf" Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.218180 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bdaedf2e-f212-4019-bb34-e5e80537f8d3-tmp\") pod \"controller-manager-66bd7f6cf-8hvbf\" (UID: \"bdaedf2e-f212-4019-bb34-e5e80537f8d3\") " pod="openshift-controller-manager/controller-manager-66bd7f6cf-8hvbf" Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.218684 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bdaedf2e-f212-4019-bb34-e5e80537f8d3-tmp\") pod \"controller-manager-66bd7f6cf-8hvbf\" (UID: \"bdaedf2e-f212-4019-bb34-e5e80537f8d3\") " pod="openshift-controller-manager/controller-manager-66bd7f6cf-8hvbf" Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.221084 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bdaedf2e-f212-4019-bb34-e5e80537f8d3-client-ca\") pod \"controller-manager-66bd7f6cf-8hvbf\" (UID: \"bdaedf2e-f212-4019-bb34-e5e80537f8d3\") " pod="openshift-controller-manager/controller-manager-66bd7f6cf-8hvbf" Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.221803 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bdaedf2e-f212-4019-bb34-e5e80537f8d3-proxy-ca-bundles\") pod \"controller-manager-66bd7f6cf-8hvbf\" (UID: \"bdaedf2e-f212-4019-bb34-e5e80537f8d3\") " pod="openshift-controller-manager/controller-manager-66bd7f6cf-8hvbf" Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.223208 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/bdaedf2e-f212-4019-bb34-e5e80537f8d3-config\") pod \"controller-manager-66bd7f6cf-8hvbf\" (UID: \"bdaedf2e-f212-4019-bb34-e5e80537f8d3\") " pod="openshift-controller-manager/controller-manager-66bd7f6cf-8hvbf" Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.224317 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bdaedf2e-f212-4019-bb34-e5e80537f8d3-serving-cert\") pod \"controller-manager-66bd7f6cf-8hvbf\" (UID: \"bdaedf2e-f212-4019-bb34-e5e80537f8d3\") " pod="openshift-controller-manager/controller-manager-66bd7f6cf-8hvbf" Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.248180 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcvrn\" (UniqueName: \"kubernetes.io/projected/bdaedf2e-f212-4019-bb34-e5e80537f8d3-kube-api-access-lcvrn\") pod \"controller-manager-66bd7f6cf-8hvbf\" (UID: \"bdaedf2e-f212-4019-bb34-e5e80537f8d3\") " pod="openshift-controller-manager/controller-manager-66bd7f6cf-8hvbf" Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.321619 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-nhgrp"] Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.328924 5131 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-nhgrp"] Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.338902 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-pssml"] Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.341683 5131 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-pssml"] Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.466456 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-v5wjg"] Jan 07 09:54:42 crc kubenswrapper[5131]: W0107 09:54:42.477542 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1adb0cf4_b0b3_4daa_b488_76c1fc919c29.slice/crio-088b1728ba969f571df4c55ddec28e48fad6872f5d5a22d1e7d549cb2cf796e2 WatchSource:0}: Error finding container 088b1728ba969f571df4c55ddec28e48fad6872f5d5a22d1e7d549cb2cf796e2: Status 404 returned error can't find the container with id 088b1728ba969f571df4c55ddec28e48fad6872f5d5a22d1e7d549cb2cf796e2 Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.480771 5131 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.539398 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-66bd7f6cf-8hvbf" Jan 07 09:54:42 crc kubenswrapper[5131]: I0107 09:54:42.781802 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-66bd7f6cf-8hvbf"] Jan 07 09:54:42 crc kubenswrapper[5131]: W0107 09:54:42.800026 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbdaedf2e_f212_4019_bb34_e5e80537f8d3.slice/crio-68ea19821f4db5af3fe24c3600829e23f06499cae3e1d677fe3299db70e4960c WatchSource:0}: Error finding container 68ea19821f4db5af3fe24c3600829e23f06499cae3e1d677fe3299db70e4960c: Status 404 returned error can't find the container with id 68ea19821f4db5af3fe24c3600829e23f06499cae3e1d677fe3299db70e4960c Jan 07 09:54:43 crc kubenswrapper[5131]: I0107 09:54:43.013906 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-v5wjg" 
event={"ID":"1adb0cf4-b0b3-4daa-b488-76c1fc919c29","Type":"ContainerStarted","Data":"3a9c185d5a4540aa9d3d2e53cf3b82c04ce9473f2c4cd468744fe58ecc28b1b2"} Jan 07 09:54:43 crc kubenswrapper[5131]: I0107 09:54:43.014016 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-v5wjg" Jan 07 09:54:43 crc kubenswrapper[5131]: I0107 09:54:43.014046 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-v5wjg" event={"ID":"1adb0cf4-b0b3-4daa-b488-76c1fc919c29","Type":"ContainerStarted","Data":"088b1728ba969f571df4c55ddec28e48fad6872f5d5a22d1e7d549cb2cf796e2"} Jan 07 09:54:43 crc kubenswrapper[5131]: I0107 09:54:43.018196 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-66bd7f6cf-8hvbf" event={"ID":"bdaedf2e-f212-4019-bb34-e5e80537f8d3","Type":"ContainerStarted","Data":"8453727d3d578097202a2756699c75ef687df72985558072fc1a24b3a19f32bd"} Jan 07 09:54:43 crc kubenswrapper[5131]: I0107 09:54:43.018274 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-66bd7f6cf-8hvbf" event={"ID":"bdaedf2e-f212-4019-bb34-e5e80537f8d3","Type":"ContainerStarted","Data":"68ea19821f4db5af3fe24c3600829e23f06499cae3e1d677fe3299db70e4960c"} Jan 07 09:54:43 crc kubenswrapper[5131]: I0107 09:54:43.018302 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-66bd7f6cf-8hvbf" Jan 07 09:54:43 crc kubenswrapper[5131]: I0107 09:54:43.051268 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-v5wjg" podStartSLOduration=2.051253966 podStartE2EDuration="2.051253966s" podCreationTimestamp="2026-01-07 09:54:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:54:43.03808687 +0000 UTC m=+311.204388474" watchObservedRunningTime="2026-01-07 09:54:43.051253966 +0000 UTC m=+311.217555530" Jan 07 09:54:43 crc kubenswrapper[5131]: I0107 09:54:43.051360 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-66bd7f6cf-8hvbf" podStartSLOduration=2.051356719 podStartE2EDuration="2.051356719s" podCreationTimestamp="2026-01-07 09:54:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:54:43.050161592 +0000 UTC m=+311.216463206" watchObservedRunningTime="2026-01-07 09:54:43.051356719 +0000 UTC m=+311.217658283" Jan 07 09:54:43 crc kubenswrapper[5131]: I0107 09:54:43.173421 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-v5wjg" Jan 07 09:54:43 crc kubenswrapper[5131]: I0107 09:54:43.713330 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-66bd7f6cf-8hvbf" Jan 07 09:54:44 crc kubenswrapper[5131]: I0107 09:54:44.194070 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4" path="/var/lib/kubelet/pods/4ed74fd2-0cf1-49e6-8c40-4d7bfd9de1c4/volumes" Jan 07 09:54:44 crc kubenswrapper[5131]: I0107 09:54:44.195655 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6497dc94-29dd-4d24-8a87-6721b752e8d3" path="/var/lib/kubelet/pods/6497dc94-29dd-4d24-8a87-6721b752e8d3/volumes" Jan 07 09:55:11 crc kubenswrapper[5131]: I0107 09:55:11.287430 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-v5wjg"] Jan 07 09:55:11 crc kubenswrapper[5131]: I0107 
09:55:11.288225 5131 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-v5wjg" podUID="1adb0cf4-b0b3-4daa-b488-76c1fc919c29" containerName="route-controller-manager" containerID="cri-o://3a9c185d5a4540aa9d3d2e53cf3b82c04ce9473f2c4cd468744fe58ecc28b1b2" gracePeriod=30 Jan 07 09:55:11 crc kubenswrapper[5131]: I0107 09:55:11.769348 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-v5wjg" Jan 07 09:55:11 crc kubenswrapper[5131]: I0107 09:55:11.810161 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7479b7d7f8-8mp89"] Jan 07 09:55:11 crc kubenswrapper[5131]: I0107 09:55:11.811093 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1adb0cf4-b0b3-4daa-b488-76c1fc919c29" containerName="route-controller-manager" Jan 07 09:55:11 crc kubenswrapper[5131]: I0107 09:55:11.811121 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="1adb0cf4-b0b3-4daa-b488-76c1fc919c29" containerName="route-controller-manager" Jan 07 09:55:11 crc kubenswrapper[5131]: I0107 09:55:11.811266 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="1adb0cf4-b0b3-4daa-b488-76c1fc919c29" containerName="route-controller-manager" Jan 07 09:55:11 crc kubenswrapper[5131]: I0107 09:55:11.815626 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7479b7d7f8-8mp89"] Jan 07 09:55:11 crc kubenswrapper[5131]: I0107 09:55:11.815762 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7479b7d7f8-8mp89"
Jan 07 09:55:11 crc kubenswrapper[5131]: I0107 09:55:11.928452 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1adb0cf4-b0b3-4daa-b488-76c1fc919c29-tmp\") pod \"1adb0cf4-b0b3-4daa-b488-76c1fc919c29\" (UID: \"1adb0cf4-b0b3-4daa-b488-76c1fc919c29\") "
Jan 07 09:55:11 crc kubenswrapper[5131]: I0107 09:55:11.928590 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4xd8\" (UniqueName: \"kubernetes.io/projected/1adb0cf4-b0b3-4daa-b488-76c1fc919c29-kube-api-access-z4xd8\") pod \"1adb0cf4-b0b3-4daa-b488-76c1fc919c29\" (UID: \"1adb0cf4-b0b3-4daa-b488-76c1fc919c29\") "
Jan 07 09:55:11 crc kubenswrapper[5131]: I0107 09:55:11.928725 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1adb0cf4-b0b3-4daa-b488-76c1fc919c29-config\") pod \"1adb0cf4-b0b3-4daa-b488-76c1fc919c29\" (UID: \"1adb0cf4-b0b3-4daa-b488-76c1fc919c29\") "
Jan 07 09:55:11 crc kubenswrapper[5131]: I0107 09:55:11.928750 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1adb0cf4-b0b3-4daa-b488-76c1fc919c29-client-ca\") pod \"1adb0cf4-b0b3-4daa-b488-76c1fc919c29\" (UID: \"1adb0cf4-b0b3-4daa-b488-76c1fc919c29\") "
Jan 07 09:55:11 crc kubenswrapper[5131]: I0107 09:55:11.928780 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1adb0cf4-b0b3-4daa-b488-76c1fc919c29-serving-cert\") pod \"1adb0cf4-b0b3-4daa-b488-76c1fc919c29\" (UID: \"1adb0cf4-b0b3-4daa-b488-76c1fc919c29\") "
Jan 07 09:55:11 crc kubenswrapper[5131]: I0107 09:55:11.928922 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1423c2d5-367e-45a5-beb2-0643f2af6bb8-client-ca\") pod \"route-controller-manager-7479b7d7f8-8mp89\" (UID: \"1423c2d5-367e-45a5-beb2-0643f2af6bb8\") " pod="openshift-route-controller-manager/route-controller-manager-7479b7d7f8-8mp89"
Jan 07 09:55:11 crc kubenswrapper[5131]: I0107 09:55:11.928964 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1423c2d5-367e-45a5-beb2-0643f2af6bb8-tmp\") pod \"route-controller-manager-7479b7d7f8-8mp89\" (UID: \"1423c2d5-367e-45a5-beb2-0643f2af6bb8\") " pod="openshift-route-controller-manager/route-controller-manager-7479b7d7f8-8mp89"
Jan 07 09:55:11 crc kubenswrapper[5131]: I0107 09:55:11.928999 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsbfn\" (UniqueName: \"kubernetes.io/projected/1423c2d5-367e-45a5-beb2-0643f2af6bb8-kube-api-access-rsbfn\") pod \"route-controller-manager-7479b7d7f8-8mp89\" (UID: \"1423c2d5-367e-45a5-beb2-0643f2af6bb8\") " pod="openshift-route-controller-manager/route-controller-manager-7479b7d7f8-8mp89"
Jan 07 09:55:11 crc kubenswrapper[5131]: I0107 09:55:11.929077 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1adb0cf4-b0b3-4daa-b488-76c1fc919c29-tmp" (OuterVolumeSpecName: "tmp") pod "1adb0cf4-b0b3-4daa-b488-76c1fc919c29" (UID: "1adb0cf4-b0b3-4daa-b488-76c1fc919c29"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 07 09:55:11 crc kubenswrapper[5131]: I0107 09:55:11.929099 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1423c2d5-367e-45a5-beb2-0643f2af6bb8-config\") pod \"route-controller-manager-7479b7d7f8-8mp89\" (UID: \"1423c2d5-367e-45a5-beb2-0643f2af6bb8\") " pod="openshift-route-controller-manager/route-controller-manager-7479b7d7f8-8mp89"
Jan 07 09:55:11 crc kubenswrapper[5131]: I0107 09:55:11.929165 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1423c2d5-367e-45a5-beb2-0643f2af6bb8-serving-cert\") pod \"route-controller-manager-7479b7d7f8-8mp89\" (UID: \"1423c2d5-367e-45a5-beb2-0643f2af6bb8\") " pod="openshift-route-controller-manager/route-controller-manager-7479b7d7f8-8mp89"
Jan 07 09:55:11 crc kubenswrapper[5131]: I0107 09:55:11.929218 5131 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1adb0cf4-b0b3-4daa-b488-76c1fc919c29-tmp\") on node \"crc\" DevicePath \"\""
Jan 07 09:55:11 crc kubenswrapper[5131]: I0107 09:55:11.930306 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1adb0cf4-b0b3-4daa-b488-76c1fc919c29-client-ca" (OuterVolumeSpecName: "client-ca") pod "1adb0cf4-b0b3-4daa-b488-76c1fc919c29" (UID: "1adb0cf4-b0b3-4daa-b488-76c1fc919c29"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 07 09:55:11 crc kubenswrapper[5131]: I0107 09:55:11.930347 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1adb0cf4-b0b3-4daa-b488-76c1fc919c29-config" (OuterVolumeSpecName: "config") pod "1adb0cf4-b0b3-4daa-b488-76c1fc919c29" (UID: "1adb0cf4-b0b3-4daa-b488-76c1fc919c29"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 07 09:55:11 crc kubenswrapper[5131]: I0107 09:55:11.935118 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1adb0cf4-b0b3-4daa-b488-76c1fc919c29-kube-api-access-z4xd8" (OuterVolumeSpecName: "kube-api-access-z4xd8") pod "1adb0cf4-b0b3-4daa-b488-76c1fc919c29" (UID: "1adb0cf4-b0b3-4daa-b488-76c1fc919c29"). InnerVolumeSpecName "kube-api-access-z4xd8". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 07 09:55:11 crc kubenswrapper[5131]: I0107 09:55:11.936337 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1adb0cf4-b0b3-4daa-b488-76c1fc919c29-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1adb0cf4-b0b3-4daa-b488-76c1fc919c29" (UID: "1adb0cf4-b0b3-4daa-b488-76c1fc919c29"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 07 09:55:12 crc kubenswrapper[5131]: I0107 09:55:12.030798 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1423c2d5-367e-45a5-beb2-0643f2af6bb8-config\") pod \"route-controller-manager-7479b7d7f8-8mp89\" (UID: \"1423c2d5-367e-45a5-beb2-0643f2af6bb8\") " pod="openshift-route-controller-manager/route-controller-manager-7479b7d7f8-8mp89"
Jan 07 09:55:12 crc kubenswrapper[5131]: I0107 09:55:12.030884 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1423c2d5-367e-45a5-beb2-0643f2af6bb8-serving-cert\") pod \"route-controller-manager-7479b7d7f8-8mp89\" (UID: \"1423c2d5-367e-45a5-beb2-0643f2af6bb8\") " pod="openshift-route-controller-manager/route-controller-manager-7479b7d7f8-8mp89"
Jan 07 09:55:12 crc kubenswrapper[5131]: I0107 09:55:12.030921 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1423c2d5-367e-45a5-beb2-0643f2af6bb8-client-ca\") pod \"route-controller-manager-7479b7d7f8-8mp89\" (UID: \"1423c2d5-367e-45a5-beb2-0643f2af6bb8\") " pod="openshift-route-controller-manager/route-controller-manager-7479b7d7f8-8mp89"
Jan 07 09:55:12 crc kubenswrapper[5131]: I0107 09:55:12.030950 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1423c2d5-367e-45a5-beb2-0643f2af6bb8-tmp\") pod \"route-controller-manager-7479b7d7f8-8mp89\" (UID: \"1423c2d5-367e-45a5-beb2-0643f2af6bb8\") " pod="openshift-route-controller-manager/route-controller-manager-7479b7d7f8-8mp89"
Jan 07 09:55:12 crc kubenswrapper[5131]: I0107 09:55:12.030981 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rsbfn\" (UniqueName: \"kubernetes.io/projected/1423c2d5-367e-45a5-beb2-0643f2af6bb8-kube-api-access-rsbfn\") pod \"route-controller-manager-7479b7d7f8-8mp89\" (UID: \"1423c2d5-367e-45a5-beb2-0643f2af6bb8\") " pod="openshift-route-controller-manager/route-controller-manager-7479b7d7f8-8mp89"
Jan 07 09:55:12 crc kubenswrapper[5131]: I0107 09:55:12.031031 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z4xd8\" (UniqueName: \"kubernetes.io/projected/1adb0cf4-b0b3-4daa-b488-76c1fc919c29-kube-api-access-z4xd8\") on node \"crc\" DevicePath \"\""
Jan 07 09:55:12 crc kubenswrapper[5131]: I0107 09:55:12.031044 5131 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1adb0cf4-b0b3-4daa-b488-76c1fc919c29-config\") on node \"crc\" DevicePath \"\""
Jan 07 09:55:12 crc kubenswrapper[5131]: I0107 09:55:12.031056 5131 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1adb0cf4-b0b3-4daa-b488-76c1fc919c29-client-ca\") on node \"crc\" DevicePath \"\""
Jan 07 09:55:12 crc kubenswrapper[5131]: I0107 09:55:12.031067 5131 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1adb0cf4-b0b3-4daa-b488-76c1fc919c29-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 07 09:55:12 crc kubenswrapper[5131]: I0107 09:55:12.032565 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1423c2d5-367e-45a5-beb2-0643f2af6bb8-tmp\") pod \"route-controller-manager-7479b7d7f8-8mp89\" (UID: \"1423c2d5-367e-45a5-beb2-0643f2af6bb8\") " pod="openshift-route-controller-manager/route-controller-manager-7479b7d7f8-8mp89"
Jan 07 09:55:12 crc kubenswrapper[5131]: I0107 09:55:12.032736 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1423c2d5-367e-45a5-beb2-0643f2af6bb8-config\") pod \"route-controller-manager-7479b7d7f8-8mp89\" (UID: \"1423c2d5-367e-45a5-beb2-0643f2af6bb8\") " pod="openshift-route-controller-manager/route-controller-manager-7479b7d7f8-8mp89"
Jan 07 09:55:12 crc kubenswrapper[5131]: I0107 09:55:12.033420 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1423c2d5-367e-45a5-beb2-0643f2af6bb8-client-ca\") pod \"route-controller-manager-7479b7d7f8-8mp89\" (UID: \"1423c2d5-367e-45a5-beb2-0643f2af6bb8\") " pod="openshift-route-controller-manager/route-controller-manager-7479b7d7f8-8mp89"
Jan 07 09:55:12 crc kubenswrapper[5131]: I0107 09:55:12.037482 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1423c2d5-367e-45a5-beb2-0643f2af6bb8-serving-cert\") pod \"route-controller-manager-7479b7d7f8-8mp89\" (UID: \"1423c2d5-367e-45a5-beb2-0643f2af6bb8\") " pod="openshift-route-controller-manager/route-controller-manager-7479b7d7f8-8mp89"
Jan 07 09:55:12 crc kubenswrapper[5131]: I0107 09:55:12.060787 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsbfn\" (UniqueName: \"kubernetes.io/projected/1423c2d5-367e-45a5-beb2-0643f2af6bb8-kube-api-access-rsbfn\") pod \"route-controller-manager-7479b7d7f8-8mp89\" (UID: \"1423c2d5-367e-45a5-beb2-0643f2af6bb8\") " pod="openshift-route-controller-manager/route-controller-manager-7479b7d7f8-8mp89"
Jan 07 09:55:12 crc kubenswrapper[5131]: I0107 09:55:12.145750 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7479b7d7f8-8mp89"
Jan 07 09:55:12 crc kubenswrapper[5131]: I0107 09:55:12.318181 5131 generic.go:358] "Generic (PLEG): container finished" podID="1adb0cf4-b0b3-4daa-b488-76c1fc919c29" containerID="3a9c185d5a4540aa9d3d2e53cf3b82c04ce9473f2c4cd468744fe58ecc28b1b2" exitCode=0
Jan 07 09:55:12 crc kubenswrapper[5131]: I0107 09:55:12.318340 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-v5wjg"
Jan 07 09:55:12 crc kubenswrapper[5131]: I0107 09:55:12.318371 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-v5wjg" event={"ID":"1adb0cf4-b0b3-4daa-b488-76c1fc919c29","Type":"ContainerDied","Data":"3a9c185d5a4540aa9d3d2e53cf3b82c04ce9473f2c4cd468744fe58ecc28b1b2"}
Jan 07 09:55:12 crc kubenswrapper[5131]: I0107 09:55:12.319710 5131 scope.go:117] "RemoveContainer" containerID="3a9c185d5a4540aa9d3d2e53cf3b82c04ce9473f2c4cd468744fe58ecc28b1b2"
Jan 07 09:55:12 crc kubenswrapper[5131]: I0107 09:55:12.319551 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-v5wjg" event={"ID":"1adb0cf4-b0b3-4daa-b488-76c1fc919c29","Type":"ContainerDied","Data":"088b1728ba969f571df4c55ddec28e48fad6872f5d5a22d1e7d549cb2cf796e2"}
Jan 07 09:55:12 crc kubenswrapper[5131]: I0107 09:55:12.349576 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-v5wjg"]
Jan 07 09:55:12 crc kubenswrapper[5131]: I0107 09:55:12.351156 5131 scope.go:117] "RemoveContainer" containerID="3a9c185d5a4540aa9d3d2e53cf3b82c04ce9473f2c4cd468744fe58ecc28b1b2"
Jan 07 09:55:12 crc kubenswrapper[5131]: E0107 09:55:12.352306 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a9c185d5a4540aa9d3d2e53cf3b82c04ce9473f2c4cd468744fe58ecc28b1b2\": container with ID starting with 3a9c185d5a4540aa9d3d2e53cf3b82c04ce9473f2c4cd468744fe58ecc28b1b2 not found: ID does not exist" containerID="3a9c185d5a4540aa9d3d2e53cf3b82c04ce9473f2c4cd468744fe58ecc28b1b2"
Jan 07 09:55:12 crc kubenswrapper[5131]: I0107 09:55:12.352405 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a9c185d5a4540aa9d3d2e53cf3b82c04ce9473f2c4cd468744fe58ecc28b1b2"} err="failed to get container status \"3a9c185d5a4540aa9d3d2e53cf3b82c04ce9473f2c4cd468744fe58ecc28b1b2\": rpc error: code = NotFound desc = could not find container \"3a9c185d5a4540aa9d3d2e53cf3b82c04ce9473f2c4cd468744fe58ecc28b1b2\": container with ID starting with 3a9c185d5a4540aa9d3d2e53cf3b82c04ce9473f2c4cd468744fe58ecc28b1b2 not found: ID does not exist"
Jan 07 09:55:12 crc kubenswrapper[5131]: I0107 09:55:12.361178 5131 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68bf8c9dc6-v5wjg"]
Jan 07 09:55:12 crc kubenswrapper[5131]: I0107 09:55:12.685084 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7479b7d7f8-8mp89"]
Jan 07 09:55:13 crc kubenswrapper[5131]: I0107 09:55:13.329251 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7479b7d7f8-8mp89" event={"ID":"1423c2d5-367e-45a5-beb2-0643f2af6bb8","Type":"ContainerStarted","Data":"58731c564d893ea2b00f4befb841a3c2216ee828f2c9f6b30a9a1d170ae01358"}
Jan 07 09:55:13 crc kubenswrapper[5131]: I0107 09:55:13.329661 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-7479b7d7f8-8mp89"
Jan 07 09:55:13 crc kubenswrapper[5131]: I0107 09:55:13.329678 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7479b7d7f8-8mp89" event={"ID":"1423c2d5-367e-45a5-beb2-0643f2af6bb8","Type":"ContainerStarted","Data":"5be0594e2886d3b08324b7f432379ea3c77001596ceb443477f8c5c64389eaf1"}
Jan 07 09:55:13 crc kubenswrapper[5131]: I0107 09:55:13.363173 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7479b7d7f8-8mp89" podStartSLOduration=2.363148184 podStartE2EDuration="2.363148184s" podCreationTimestamp="2026-01-07 09:55:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:55:13.352672356 +0000 UTC m=+341.518973970" watchObservedRunningTime="2026-01-07 09:55:13.363148184 +0000 UTC m=+341.529449758"
Jan 07 09:55:13 crc kubenswrapper[5131]: I0107 09:55:13.482264 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7479b7d7f8-8mp89"
Jan 07 09:55:14 crc kubenswrapper[5131]: I0107 09:55:14.186934 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1adb0cf4-b0b3-4daa-b488-76c1fc919c29" path="/var/lib/kubelet/pods/1adb0cf4-b0b3-4daa-b488-76c1fc919c29/volumes"
Jan 07 09:55:34 crc kubenswrapper[5131]: I0107 09:55:34.006467 5131 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 07 09:55:54 crc kubenswrapper[5131]: I0107 09:55:54.627351 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l9wkb"]
Jan 07 09:55:54 crc kubenswrapper[5131]: I0107 09:55:54.628487 5131 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-l9wkb" podUID="b8cac87e-c013-4988-a977-5b1f038c1d34" containerName="registry-server" containerID="cri-o://eec4911518d01074a957a1978a2e38447897ebd56c38cb3d25bbb8c71d4e3768" gracePeriod=30
Jan 07 09:55:54 crc kubenswrapper[5131]: I0107 09:55:54.636010 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-db2q2"]
Jan 07 09:55:54 crc kubenswrapper[5131]: I0107 09:55:54.636339 5131 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-db2q2" podUID="07c53e69-8037-4261-a288-5f4505e6f7e5" containerName="registry-server" containerID="cri-o://be639cc6e215a1653e1882ac31c810cdc436d6d4900641afe82fc02c9e461a7a" gracePeriod=30
Jan 07 09:55:54 crc kubenswrapper[5131]: I0107 09:55:54.652057 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-mrfk7"]
Jan 07 09:55:54 crc kubenswrapper[5131]: I0107 09:55:54.652805 5131 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-547dbd544d-mrfk7" podUID="1697c475-b030-40da-9ed0-7884931c55fd" containerName="marketplace-operator" containerID="cri-o://5c761bf1ec3d205aecf5fb8038cc4f468a7910c6de1826cdd084deade4fc5e4a" gracePeriod=30
Jan 07 09:55:54 crc kubenswrapper[5131]: I0107 09:55:54.669550 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t8cf2"]
Jan 07 09:55:54 crc kubenswrapper[5131]: I0107 09:55:54.670201 5131 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-t8cf2" podUID="f04172ba-2c1f-4d8f-b742-7d182136ca81" containerName="registry-server" containerID="cri-o://774fc91737b83234ad759d33e7acab61ae58477c940f31e59fe8c208b89b4f62" gracePeriod=30
Jan 07 09:55:54 crc kubenswrapper[5131]: I0107 09:55:54.681029 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cbvmr"]
Jan 07 09:55:54 crc kubenswrapper[5131]: I0107 09:55:54.681352 5131 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cbvmr" podUID="4a44502e-cd8c-4525-95f6-33c1eab86d42" containerName="registry-server" containerID="cri-o://3f5a555c406d013a60947ffa73e963e1b2ff5a23540d4b6de8d228bfcad205c2" gracePeriod=30
Jan 07 09:55:54 crc kubenswrapper[5131]: I0107 09:55:54.689499 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-x86wx"]
Jan 07 09:55:54 crc kubenswrapper[5131]: I0107 09:55:54.782000 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-x86wx"]
Jan 07 09:55:54 crc kubenswrapper[5131]: I0107 09:55:54.782250 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-x86wx"
Jan 07 09:55:54 crc kubenswrapper[5131]: I0107 09:55:54.887655 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d853fb7e-12e8-4060-849f-428cc2b6e85f-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-x86wx\" (UID: \"d853fb7e-12e8-4060-849f-428cc2b6e85f\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-x86wx"
Jan 07 09:55:54 crc kubenswrapper[5131]: I0107 09:55:54.887715 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d853fb7e-12e8-4060-849f-428cc2b6e85f-tmp\") pod \"marketplace-operator-547dbd544d-x86wx\" (UID: \"d853fb7e-12e8-4060-849f-428cc2b6e85f\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-x86wx"
Jan 07 09:55:54 crc kubenswrapper[5131]: I0107 09:55:54.887865 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d853fb7e-12e8-4060-849f-428cc2b6e85f-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-x86wx\" (UID: \"d853fb7e-12e8-4060-849f-428cc2b6e85f\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-x86wx"
Jan 07 09:55:54 crc kubenswrapper[5131]: I0107 09:55:54.887906 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9nhw\" (UniqueName: \"kubernetes.io/projected/d853fb7e-12e8-4060-849f-428cc2b6e85f-kube-api-access-p9nhw\") pod \"marketplace-operator-547dbd544d-x86wx\" (UID: \"d853fb7e-12e8-4060-849f-428cc2b6e85f\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-x86wx"
Jan 07 09:55:54 crc kubenswrapper[5131]: I0107 09:55:54.989163 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d853fb7e-12e8-4060-849f-428cc2b6e85f-tmp\") pod \"marketplace-operator-547dbd544d-x86wx\" (UID: \"d853fb7e-12e8-4060-849f-428cc2b6e85f\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-x86wx"
Jan 07 09:55:54 crc kubenswrapper[5131]: I0107 09:55:54.989505 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d853fb7e-12e8-4060-849f-428cc2b6e85f-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-x86wx\" (UID: \"d853fb7e-12e8-4060-849f-428cc2b6e85f\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-x86wx"
Jan 07 09:55:54 crc kubenswrapper[5131]: I0107 09:55:54.989535 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p9nhw\" (UniqueName: \"kubernetes.io/projected/d853fb7e-12e8-4060-849f-428cc2b6e85f-kube-api-access-p9nhw\") pod \"marketplace-operator-547dbd544d-x86wx\" (UID: \"d853fb7e-12e8-4060-849f-428cc2b6e85f\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-x86wx"
Jan 07 09:55:54 crc kubenswrapper[5131]: I0107 09:55:54.989590 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d853fb7e-12e8-4060-849f-428cc2b6e85f-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-x86wx\" (UID: \"d853fb7e-12e8-4060-849f-428cc2b6e85f\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-x86wx"
Jan 07 09:55:54 crc kubenswrapper[5131]: I0107 09:55:54.991381 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d853fb7e-12e8-4060-849f-428cc2b6e85f-tmp\") pod \"marketplace-operator-547dbd544d-x86wx\" (UID: \"d853fb7e-12e8-4060-849f-428cc2b6e85f\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-x86wx"
Jan 07 09:55:54 crc kubenswrapper[5131]: I0107 09:55:54.992273 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d853fb7e-12e8-4060-849f-428cc2b6e85f-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-x86wx\" (UID: \"d853fb7e-12e8-4060-849f-428cc2b6e85f\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-x86wx"
Jan 07 09:55:54 crc kubenswrapper[5131]: I0107 09:55:54.996441 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d853fb7e-12e8-4060-849f-428cc2b6e85f-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-x86wx\" (UID: \"d853fb7e-12e8-4060-849f-428cc2b6e85f\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-x86wx"
Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.006708 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9nhw\" (UniqueName: \"kubernetes.io/projected/d853fb7e-12e8-4060-849f-428cc2b6e85f-kube-api-access-p9nhw\") pod \"marketplace-operator-547dbd544d-x86wx\" (UID: \"d853fb7e-12e8-4060-849f-428cc2b6e85f\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-x86wx"
Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.156144 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-x86wx"
Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.160264 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l9wkb"
Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.165649 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t8cf2"
Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.172193 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cbvmr"
Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.195358 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f04172ba-2c1f-4d8f-b742-7d182136ca81-utilities\") pod \"f04172ba-2c1f-4d8f-b742-7d182136ca81\" (UID: \"f04172ba-2c1f-4d8f-b742-7d182136ca81\") "
Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.195416 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8cac87e-c013-4988-a977-5b1f038c1d34-utilities\") pod \"b8cac87e-c013-4988-a977-5b1f038c1d34\" (UID: \"b8cac87e-c013-4988-a977-5b1f038c1d34\") "
Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.195446 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8cac87e-c013-4988-a977-5b1f038c1d34-catalog-content\") pod \"b8cac87e-c013-4988-a977-5b1f038c1d34\" (UID: \"b8cac87e-c013-4988-a977-5b1f038c1d34\") "
Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.195485 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qn5wz\" (UniqueName: \"kubernetes.io/projected/4a44502e-cd8c-4525-95f6-33c1eab86d42-kube-api-access-qn5wz\") pod \"4a44502e-cd8c-4525-95f6-33c1eab86d42\" (UID: \"4a44502e-cd8c-4525-95f6-33c1eab86d42\") "
Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.195561 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvnnv\" (UniqueName: \"kubernetes.io/projected/f04172ba-2c1f-4d8f-b742-7d182136ca81-kube-api-access-fvnnv\") pod \"f04172ba-2c1f-4d8f-b742-7d182136ca81\" (UID: \"f04172ba-2c1f-4d8f-b742-7d182136ca81\") "
Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.195611 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f04172ba-2c1f-4d8f-b742-7d182136ca81-catalog-content\") pod \"f04172ba-2c1f-4d8f-b742-7d182136ca81\" (UID: \"f04172ba-2c1f-4d8f-b742-7d182136ca81\") "
Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.196734 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a44502e-cd8c-4525-95f6-33c1eab86d42-utilities\") pod \"4a44502e-cd8c-4525-95f6-33c1eab86d42\" (UID: \"4a44502e-cd8c-4525-95f6-33c1eab86d42\") "
Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.196871 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a44502e-cd8c-4525-95f6-33c1eab86d42-catalog-content\") pod \"4a44502e-cd8c-4525-95f6-33c1eab86d42\" (UID: \"4a44502e-cd8c-4525-95f6-33c1eab86d42\") "
Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.196969 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t58j6\" (UniqueName: \"kubernetes.io/projected/b8cac87e-c013-4988-a977-5b1f038c1d34-kube-api-access-t58j6\") pod \"b8cac87e-c013-4988-a977-5b1f038c1d34\" (UID: \"b8cac87e-c013-4988-a977-5b1f038c1d34\") "
Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.197026 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8cac87e-c013-4988-a977-5b1f038c1d34-utilities" (OuterVolumeSpecName: "utilities") pod "b8cac87e-c013-4988-a977-5b1f038c1d34" (UID: "b8cac87e-c013-4988-a977-5b1f038c1d34"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.197182 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f04172ba-2c1f-4d8f-b742-7d182136ca81-utilities" (OuterVolumeSpecName: "utilities") pod "f04172ba-2c1f-4d8f-b742-7d182136ca81" (UID: "f04172ba-2c1f-4d8f-b742-7d182136ca81"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.197576 5131 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f04172ba-2c1f-4d8f-b742-7d182136ca81-utilities\") on node \"crc\" DevicePath \"\""
Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.197647 5131 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8cac87e-c013-4988-a977-5b1f038c1d34-utilities\") on node \"crc\" DevicePath \"\""
Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.198278 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a44502e-cd8c-4525-95f6-33c1eab86d42-utilities" (OuterVolumeSpecName: "utilities") pod "4a44502e-cd8c-4525-95f6-33c1eab86d42" (UID: "4a44502e-cd8c-4525-95f6-33c1eab86d42"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.200552 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a44502e-cd8c-4525-95f6-33c1eab86d42-kube-api-access-qn5wz" (OuterVolumeSpecName: "kube-api-access-qn5wz") pod "4a44502e-cd8c-4525-95f6-33c1eab86d42" (UID: "4a44502e-cd8c-4525-95f6-33c1eab86d42"). InnerVolumeSpecName "kube-api-access-qn5wz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.201976 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f04172ba-2c1f-4d8f-b742-7d182136ca81-kube-api-access-fvnnv" (OuterVolumeSpecName: "kube-api-access-fvnnv") pod "f04172ba-2c1f-4d8f-b742-7d182136ca81" (UID: "f04172ba-2c1f-4d8f-b742-7d182136ca81"). InnerVolumeSpecName "kube-api-access-fvnnv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.209884 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8cac87e-c013-4988-a977-5b1f038c1d34-kube-api-access-t58j6" (OuterVolumeSpecName: "kube-api-access-t58j6") pod "b8cac87e-c013-4988-a977-5b1f038c1d34" (UID: "b8cac87e-c013-4988-a977-5b1f038c1d34"). InnerVolumeSpecName "kube-api-access-t58j6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.213497 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f04172ba-2c1f-4d8f-b742-7d182136ca81-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f04172ba-2c1f-4d8f-b742-7d182136ca81" (UID: "f04172ba-2c1f-4d8f-b742-7d182136ca81"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.233639 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8cac87e-c013-4988-a977-5b1f038c1d34-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b8cac87e-c013-4988-a977-5b1f038c1d34" (UID: "b8cac87e-c013-4988-a977-5b1f038c1d34"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.300116 5131 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f04172ba-2c1f-4d8f-b742-7d182136ca81-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.300607 5131 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a44502e-cd8c-4525-95f6-33c1eab86d42-utilities\") on node \"crc\" DevicePath \"\""
Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.300622 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t58j6\" (UniqueName: \"kubernetes.io/projected/b8cac87e-c013-4988-a977-5b1f038c1d34-kube-api-access-t58j6\") on node \"crc\" DevicePath \"\""
Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.300637 5131 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8cac87e-c013-4988-a977-5b1f038c1d34-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.300649 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qn5wz\" (UniqueName: \"kubernetes.io/projected/4a44502e-cd8c-4525-95f6-33c1eab86d42-kube-api-access-qn5wz\") on node \"crc\" DevicePath \"\""
Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.300661 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fvnnv\" (UniqueName: \"kubernetes.io/projected/f04172ba-2c1f-4d8f-b742-7d182136ca81-kube-api-access-fvnnv\") on node \"crc\" DevicePath \"\""
Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.314496 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a44502e-cd8c-4525-95f6-33c1eab86d42-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4a44502e-cd8c-4525-95f6-33c1eab86d42" (UID: "4a44502e-cd8c-4525-95f6-33c1eab86d42"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.402489 5131 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a44502e-cd8c-4525-95f6-33c1eab86d42-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.587086 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-x86wx"]
Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.594023 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-x86wx" event={"ID":"d853fb7e-12e8-4060-849f-428cc2b6e85f","Type":"ContainerStarted","Data":"6ca9fe54e1420451c83dc875f1234e09c6d1f833db7dea3463f73dc38daf860e"}
Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.596940 5131 generic.go:358] "Generic (PLEG): container finished" podID="1697c475-b030-40da-9ed0-7884931c55fd" containerID="5c761bf1ec3d205aecf5fb8038cc4f468a7910c6de1826cdd084deade4fc5e4a" exitCode=0
Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.597129 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-mrfk7" event={"ID":"1697c475-b030-40da-9ed0-7884931c55fd","Type":"ContainerDied","Data":"5c761bf1ec3d205aecf5fb8038cc4f468a7910c6de1826cdd084deade4fc5e4a"}
Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.597218 5131 scope.go:117] "RemoveContainer" containerID="fbcc0b4d92087a423b14dffdae57ff1f54fa5b1109f42a876ac1080d378c4598"
Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.602322 5131 generic.go:358] "Generic (PLEG): container finished" podID="b8cac87e-c013-4988-a977-5b1f038c1d34" containerID="eec4911518d01074a957a1978a2e38447897ebd56c38cb3d25bbb8c71d4e3768" exitCode=0
Jan
07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.602383 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9wkb" event={"ID":"b8cac87e-c013-4988-a977-5b1f038c1d34","Type":"ContainerDied","Data":"eec4911518d01074a957a1978a2e38447897ebd56c38cb3d25bbb8c71d4e3768"} Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.602411 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9wkb" event={"ID":"b8cac87e-c013-4988-a977-5b1f038c1d34","Type":"ContainerDied","Data":"1aeed343e3056936b6813c508d456224884975f42dee11f7088d113bf0ee41f5"} Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.602491 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l9wkb" Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.606096 5131 generic.go:358] "Generic (PLEG): container finished" podID="07c53e69-8037-4261-a288-5f4505e6f7e5" containerID="be639cc6e215a1653e1882ac31c810cdc436d6d4900641afe82fc02c9e461a7a" exitCode=0 Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.606261 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-db2q2" event={"ID":"07c53e69-8037-4261-a288-5f4505e6f7e5","Type":"ContainerDied","Data":"be639cc6e215a1653e1882ac31c810cdc436d6d4900641afe82fc02c9e461a7a"} Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.608578 5131 generic.go:358] "Generic (PLEG): container finished" podID="f04172ba-2c1f-4d8f-b742-7d182136ca81" containerID="774fc91737b83234ad759d33e7acab61ae58477c940f31e59fe8c208b89b4f62" exitCode=0 Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.608630 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t8cf2" event={"ID":"f04172ba-2c1f-4d8f-b742-7d182136ca81","Type":"ContainerDied","Data":"774fc91737b83234ad759d33e7acab61ae58477c940f31e59fe8c208b89b4f62"} Jan 07 
09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.609051 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t8cf2" event={"ID":"f04172ba-2c1f-4d8f-b742-7d182136ca81","Type":"ContainerDied","Data":"7a86a6f94af62877e2287b221ea9790c68625c7acc3df28a5d96a835a46a6896"} Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.608665 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t8cf2" Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.611802 5131 generic.go:358] "Generic (PLEG): container finished" podID="4a44502e-cd8c-4525-95f6-33c1eab86d42" containerID="3f5a555c406d013a60947ffa73e963e1b2ff5a23540d4b6de8d228bfcad205c2" exitCode=0 Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.611937 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cbvmr" Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.611949 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbvmr" event={"ID":"4a44502e-cd8c-4525-95f6-33c1eab86d42","Type":"ContainerDied","Data":"3f5a555c406d013a60947ffa73e963e1b2ff5a23540d4b6de8d228bfcad205c2"} Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.612097 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbvmr" event={"ID":"4a44502e-cd8c-4525-95f6-33c1eab86d42","Type":"ContainerDied","Data":"a27245926c6a1e08507e3bda75330e68e6e1aef76cc3a09361cf0a01359b6ee9"} Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.618537 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-db2q2" Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.647672 5131 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-mrfk7" Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.649794 5131 scope.go:117] "RemoveContainer" containerID="eec4911518d01074a957a1978a2e38447897ebd56c38cb3d25bbb8c71d4e3768" Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.672631 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l9wkb"] Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.678757 5131 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-l9wkb"] Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.686463 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t8cf2"] Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.689810 5131 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-t8cf2"] Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.693020 5131 scope.go:117] "RemoveContainer" containerID="d45e857e0b41ab6a86a0b5757498c0e5d38b6578dc1175c6829879a56ab43d39" Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.698689 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cbvmr"] Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.703267 5131 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cbvmr"] Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.704922 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcj8g\" (UniqueName: \"kubernetes.io/projected/07c53e69-8037-4261-a288-5f4505e6f7e5-kube-api-access-bcj8g\") pod \"07c53e69-8037-4261-a288-5f4505e6f7e5\" (UID: \"07c53e69-8037-4261-a288-5f4505e6f7e5\") " Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.704984 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07c53e69-8037-4261-a288-5f4505e6f7e5-catalog-content\") pod \"07c53e69-8037-4261-a288-5f4505e6f7e5\" (UID: \"07c53e69-8037-4261-a288-5f4505e6f7e5\") " Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.705031 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1697c475-b030-40da-9ed0-7884931c55fd-tmp\") pod \"1697c475-b030-40da-9ed0-7884931c55fd\" (UID: \"1697c475-b030-40da-9ed0-7884931c55fd\") " Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.705092 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1697c475-b030-40da-9ed0-7884931c55fd-marketplace-trusted-ca\") pod \"1697c475-b030-40da-9ed0-7884931c55fd\" (UID: \"1697c475-b030-40da-9ed0-7884931c55fd\") " Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.705124 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07c53e69-8037-4261-a288-5f4505e6f7e5-utilities\") pod \"07c53e69-8037-4261-a288-5f4505e6f7e5\" (UID: \"07c53e69-8037-4261-a288-5f4505e6f7e5\") " Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.705172 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1697c475-b030-40da-9ed0-7884931c55fd-marketplace-operator-metrics\") pod \"1697c475-b030-40da-9ed0-7884931c55fd\" (UID: \"1697c475-b030-40da-9ed0-7884931c55fd\") " Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.705215 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2qkb\" (UniqueName: \"kubernetes.io/projected/1697c475-b030-40da-9ed0-7884931c55fd-kube-api-access-h2qkb\") pod \"1697c475-b030-40da-9ed0-7884931c55fd\" (UID: 
\"1697c475-b030-40da-9ed0-7884931c55fd\") " Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.706302 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1697c475-b030-40da-9ed0-7884931c55fd-tmp" (OuterVolumeSpecName: "tmp") pod "1697c475-b030-40da-9ed0-7884931c55fd" (UID: "1697c475-b030-40da-9ed0-7884931c55fd"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.706474 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1697c475-b030-40da-9ed0-7884931c55fd-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "1697c475-b030-40da-9ed0-7884931c55fd" (UID: "1697c475-b030-40da-9ed0-7884931c55fd"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.706943 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07c53e69-8037-4261-a288-5f4505e6f7e5-utilities" (OuterVolumeSpecName: "utilities") pod "07c53e69-8037-4261-a288-5f4505e6f7e5" (UID: "07c53e69-8037-4261-a288-5f4505e6f7e5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.710770 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07c53e69-8037-4261-a288-5f4505e6f7e5-kube-api-access-bcj8g" (OuterVolumeSpecName: "kube-api-access-bcj8g") pod "07c53e69-8037-4261-a288-5f4505e6f7e5" (UID: "07c53e69-8037-4261-a288-5f4505e6f7e5"). InnerVolumeSpecName "kube-api-access-bcj8g". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.711448 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1697c475-b030-40da-9ed0-7884931c55fd-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "1697c475-b030-40da-9ed0-7884931c55fd" (UID: "1697c475-b030-40da-9ed0-7884931c55fd"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.713780 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1697c475-b030-40da-9ed0-7884931c55fd-kube-api-access-h2qkb" (OuterVolumeSpecName: "kube-api-access-h2qkb") pod "1697c475-b030-40da-9ed0-7884931c55fd" (UID: "1697c475-b030-40da-9ed0-7884931c55fd"). InnerVolumeSpecName "kube-api-access-h2qkb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.719816 5131 scope.go:117] "RemoveContainer" containerID="3cc8e8df055fc6b9073a8c04748d0999b2ec523d927ac26e4118eaa9c9b3ab2f" Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.761771 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07c53e69-8037-4261-a288-5f4505e6f7e5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "07c53e69-8037-4261-a288-5f4505e6f7e5" (UID: "07c53e69-8037-4261-a288-5f4505e6f7e5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.765348 5131 scope.go:117] "RemoveContainer" containerID="eec4911518d01074a957a1978a2e38447897ebd56c38cb3d25bbb8c71d4e3768" Jan 07 09:55:55 crc kubenswrapper[5131]: E0107 09:55:55.765984 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eec4911518d01074a957a1978a2e38447897ebd56c38cb3d25bbb8c71d4e3768\": container with ID starting with eec4911518d01074a957a1978a2e38447897ebd56c38cb3d25bbb8c71d4e3768 not found: ID does not exist" containerID="eec4911518d01074a957a1978a2e38447897ebd56c38cb3d25bbb8c71d4e3768" Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.766027 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eec4911518d01074a957a1978a2e38447897ebd56c38cb3d25bbb8c71d4e3768"} err="failed to get container status \"eec4911518d01074a957a1978a2e38447897ebd56c38cb3d25bbb8c71d4e3768\": rpc error: code = NotFound desc = could not find container \"eec4911518d01074a957a1978a2e38447897ebd56c38cb3d25bbb8c71d4e3768\": container with ID starting with eec4911518d01074a957a1978a2e38447897ebd56c38cb3d25bbb8c71d4e3768 not found: ID does not exist" Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.766051 5131 scope.go:117] "RemoveContainer" containerID="d45e857e0b41ab6a86a0b5757498c0e5d38b6578dc1175c6829879a56ab43d39" Jan 07 09:55:55 crc kubenswrapper[5131]: E0107 09:55:55.768060 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d45e857e0b41ab6a86a0b5757498c0e5d38b6578dc1175c6829879a56ab43d39\": container with ID starting with d45e857e0b41ab6a86a0b5757498c0e5d38b6578dc1175c6829879a56ab43d39 not found: ID does not exist" containerID="d45e857e0b41ab6a86a0b5757498c0e5d38b6578dc1175c6829879a56ab43d39" Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.768089 
5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d45e857e0b41ab6a86a0b5757498c0e5d38b6578dc1175c6829879a56ab43d39"} err="failed to get container status \"d45e857e0b41ab6a86a0b5757498c0e5d38b6578dc1175c6829879a56ab43d39\": rpc error: code = NotFound desc = could not find container \"d45e857e0b41ab6a86a0b5757498c0e5d38b6578dc1175c6829879a56ab43d39\": container with ID starting with d45e857e0b41ab6a86a0b5757498c0e5d38b6578dc1175c6829879a56ab43d39 not found: ID does not exist" Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.768110 5131 scope.go:117] "RemoveContainer" containerID="3cc8e8df055fc6b9073a8c04748d0999b2ec523d927ac26e4118eaa9c9b3ab2f" Jan 07 09:55:55 crc kubenswrapper[5131]: E0107 09:55:55.768776 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cc8e8df055fc6b9073a8c04748d0999b2ec523d927ac26e4118eaa9c9b3ab2f\": container with ID starting with 3cc8e8df055fc6b9073a8c04748d0999b2ec523d927ac26e4118eaa9c9b3ab2f not found: ID does not exist" containerID="3cc8e8df055fc6b9073a8c04748d0999b2ec523d927ac26e4118eaa9c9b3ab2f" Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.768797 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cc8e8df055fc6b9073a8c04748d0999b2ec523d927ac26e4118eaa9c9b3ab2f"} err="failed to get container status \"3cc8e8df055fc6b9073a8c04748d0999b2ec523d927ac26e4118eaa9c9b3ab2f\": rpc error: code = NotFound desc = could not find container \"3cc8e8df055fc6b9073a8c04748d0999b2ec523d927ac26e4118eaa9c9b3ab2f\": container with ID starting with 3cc8e8df055fc6b9073a8c04748d0999b2ec523d927ac26e4118eaa9c9b3ab2f not found: ID does not exist" Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.768810 5131 scope.go:117] "RemoveContainer" containerID="774fc91737b83234ad759d33e7acab61ae58477c940f31e59fe8c208b89b4f62" Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 
09:55:55.783954 5131 scope.go:117] "RemoveContainer" containerID="04fb37854152d7df41852802494ff8288e68728d61c83b4be0b010264cfb19f0" Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.806807 5131 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1697c475-b030-40da-9ed0-7884931c55fd-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.806866 5131 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07c53e69-8037-4261-a288-5f4505e6f7e5-utilities\") on node \"crc\" DevicePath \"\"" Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.806880 5131 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1697c475-b030-40da-9ed0-7884931c55fd-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.806891 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h2qkb\" (UniqueName: \"kubernetes.io/projected/1697c475-b030-40da-9ed0-7884931c55fd-kube-api-access-h2qkb\") on node \"crc\" DevicePath \"\"" Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.806905 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bcj8g\" (UniqueName: \"kubernetes.io/projected/07c53e69-8037-4261-a288-5f4505e6f7e5-kube-api-access-bcj8g\") on node \"crc\" DevicePath \"\"" Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.806916 5131 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07c53e69-8037-4261-a288-5f4505e6f7e5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.806927 5131 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/1697c475-b030-40da-9ed0-7884931c55fd-tmp\") on node \"crc\" DevicePath \"\"" Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.833426 5131 scope.go:117] "RemoveContainer" containerID="9df5116a0ef973141e408ef8795906b3c0e76c85c20aafd2db2d6b6be77d3cc3" Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.849614 5131 scope.go:117] "RemoveContainer" containerID="774fc91737b83234ad759d33e7acab61ae58477c940f31e59fe8c208b89b4f62" Jan 07 09:55:55 crc kubenswrapper[5131]: E0107 09:55:55.850238 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"774fc91737b83234ad759d33e7acab61ae58477c940f31e59fe8c208b89b4f62\": container with ID starting with 774fc91737b83234ad759d33e7acab61ae58477c940f31e59fe8c208b89b4f62 not found: ID does not exist" containerID="774fc91737b83234ad759d33e7acab61ae58477c940f31e59fe8c208b89b4f62" Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.850281 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"774fc91737b83234ad759d33e7acab61ae58477c940f31e59fe8c208b89b4f62"} err="failed to get container status \"774fc91737b83234ad759d33e7acab61ae58477c940f31e59fe8c208b89b4f62\": rpc error: code = NotFound desc = could not find container \"774fc91737b83234ad759d33e7acab61ae58477c940f31e59fe8c208b89b4f62\": container with ID starting with 774fc91737b83234ad759d33e7acab61ae58477c940f31e59fe8c208b89b4f62 not found: ID does not exist" Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.850307 5131 scope.go:117] "RemoveContainer" containerID="04fb37854152d7df41852802494ff8288e68728d61c83b4be0b010264cfb19f0" Jan 07 09:55:55 crc kubenswrapper[5131]: E0107 09:55:55.850607 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04fb37854152d7df41852802494ff8288e68728d61c83b4be0b010264cfb19f0\": container with ID starting with 
04fb37854152d7df41852802494ff8288e68728d61c83b4be0b010264cfb19f0 not found: ID does not exist" containerID="04fb37854152d7df41852802494ff8288e68728d61c83b4be0b010264cfb19f0" Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.850678 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04fb37854152d7df41852802494ff8288e68728d61c83b4be0b010264cfb19f0"} err="failed to get container status \"04fb37854152d7df41852802494ff8288e68728d61c83b4be0b010264cfb19f0\": rpc error: code = NotFound desc = could not find container \"04fb37854152d7df41852802494ff8288e68728d61c83b4be0b010264cfb19f0\": container with ID starting with 04fb37854152d7df41852802494ff8288e68728d61c83b4be0b010264cfb19f0 not found: ID does not exist" Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.850762 5131 scope.go:117] "RemoveContainer" containerID="9df5116a0ef973141e408ef8795906b3c0e76c85c20aafd2db2d6b6be77d3cc3" Jan 07 09:55:55 crc kubenswrapper[5131]: E0107 09:55:55.851530 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9df5116a0ef973141e408ef8795906b3c0e76c85c20aafd2db2d6b6be77d3cc3\": container with ID starting with 9df5116a0ef973141e408ef8795906b3c0e76c85c20aafd2db2d6b6be77d3cc3 not found: ID does not exist" containerID="9df5116a0ef973141e408ef8795906b3c0e76c85c20aafd2db2d6b6be77d3cc3" Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.851571 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9df5116a0ef973141e408ef8795906b3c0e76c85c20aafd2db2d6b6be77d3cc3"} err="failed to get container status \"9df5116a0ef973141e408ef8795906b3c0e76c85c20aafd2db2d6b6be77d3cc3\": rpc error: code = NotFound desc = could not find container \"9df5116a0ef973141e408ef8795906b3c0e76c85c20aafd2db2d6b6be77d3cc3\": container with ID starting with 9df5116a0ef973141e408ef8795906b3c0e76c85c20aafd2db2d6b6be77d3cc3 not found: ID does not 
exist" Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.851592 5131 scope.go:117] "RemoveContainer" containerID="3f5a555c406d013a60947ffa73e963e1b2ff5a23540d4b6de8d228bfcad205c2" Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.868373 5131 scope.go:117] "RemoveContainer" containerID="0f6c5a485691c047eb929831f7b652fb9a61e0f67f693427d7303a8c325c3097" Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.883876 5131 scope.go:117] "RemoveContainer" containerID="7a38fc70b7401d7e94adba227ab6bc98f69b66e6adc85a42bf9f6e689c257a5a" Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.897079 5131 scope.go:117] "RemoveContainer" containerID="3f5a555c406d013a60947ffa73e963e1b2ff5a23540d4b6de8d228bfcad205c2" Jan 07 09:55:55 crc kubenswrapper[5131]: E0107 09:55:55.905071 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f5a555c406d013a60947ffa73e963e1b2ff5a23540d4b6de8d228bfcad205c2\": container with ID starting with 3f5a555c406d013a60947ffa73e963e1b2ff5a23540d4b6de8d228bfcad205c2 not found: ID does not exist" containerID="3f5a555c406d013a60947ffa73e963e1b2ff5a23540d4b6de8d228bfcad205c2" Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.905105 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f5a555c406d013a60947ffa73e963e1b2ff5a23540d4b6de8d228bfcad205c2"} err="failed to get container status \"3f5a555c406d013a60947ffa73e963e1b2ff5a23540d4b6de8d228bfcad205c2\": rpc error: code = NotFound desc = could not find container \"3f5a555c406d013a60947ffa73e963e1b2ff5a23540d4b6de8d228bfcad205c2\": container with ID starting with 3f5a555c406d013a60947ffa73e963e1b2ff5a23540d4b6de8d228bfcad205c2 not found: ID does not exist" Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.905128 5131 scope.go:117] "RemoveContainer" containerID="0f6c5a485691c047eb929831f7b652fb9a61e0f67f693427d7303a8c325c3097" Jan 07 09:55:55 crc 
kubenswrapper[5131]: E0107 09:55:55.905413 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f6c5a485691c047eb929831f7b652fb9a61e0f67f693427d7303a8c325c3097\": container with ID starting with 0f6c5a485691c047eb929831f7b652fb9a61e0f67f693427d7303a8c325c3097 not found: ID does not exist" containerID="0f6c5a485691c047eb929831f7b652fb9a61e0f67f693427d7303a8c325c3097" Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.905440 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f6c5a485691c047eb929831f7b652fb9a61e0f67f693427d7303a8c325c3097"} err="failed to get container status \"0f6c5a485691c047eb929831f7b652fb9a61e0f67f693427d7303a8c325c3097\": rpc error: code = NotFound desc = could not find container \"0f6c5a485691c047eb929831f7b652fb9a61e0f67f693427d7303a8c325c3097\": container with ID starting with 0f6c5a485691c047eb929831f7b652fb9a61e0f67f693427d7303a8c325c3097 not found: ID does not exist" Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.905492 5131 scope.go:117] "RemoveContainer" containerID="7a38fc70b7401d7e94adba227ab6bc98f69b66e6adc85a42bf9f6e689c257a5a" Jan 07 09:55:55 crc kubenswrapper[5131]: E0107 09:55:55.907014 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a38fc70b7401d7e94adba227ab6bc98f69b66e6adc85a42bf9f6e689c257a5a\": container with ID starting with 7a38fc70b7401d7e94adba227ab6bc98f69b66e6adc85a42bf9f6e689c257a5a not found: ID does not exist" containerID="7a38fc70b7401d7e94adba227ab6bc98f69b66e6adc85a42bf9f6e689c257a5a" Jan 07 09:55:55 crc kubenswrapper[5131]: I0107 09:55:55.907043 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a38fc70b7401d7e94adba227ab6bc98f69b66e6adc85a42bf9f6e689c257a5a"} err="failed to get container status 
\"7a38fc70b7401d7e94adba227ab6bc98f69b66e6adc85a42bf9f6e689c257a5a\": rpc error: code = NotFound desc = could not find container \"7a38fc70b7401d7e94adba227ab6bc98f69b66e6adc85a42bf9f6e689c257a5a\": container with ID starting with 7a38fc70b7401d7e94adba227ab6bc98f69b66e6adc85a42bf9f6e689c257a5a not found: ID does not exist" Jan 07 09:55:56 crc kubenswrapper[5131]: I0107 09:55:56.188637 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a44502e-cd8c-4525-95f6-33c1eab86d42" path="/var/lib/kubelet/pods/4a44502e-cd8c-4525-95f6-33c1eab86d42/volumes" Jan 07 09:55:56 crc kubenswrapper[5131]: I0107 09:55:56.189434 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8cac87e-c013-4988-a977-5b1f038c1d34" path="/var/lib/kubelet/pods/b8cac87e-c013-4988-a977-5b1f038c1d34/volumes" Jan 07 09:55:56 crc kubenswrapper[5131]: I0107 09:55:56.190202 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f04172ba-2c1f-4d8f-b742-7d182136ca81" path="/var/lib/kubelet/pods/f04172ba-2c1f-4d8f-b742-7d182136ca81/volumes" Jan 07 09:55:56 crc kubenswrapper[5131]: I0107 09:55:56.620347 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-x86wx" event={"ID":"d853fb7e-12e8-4060-849f-428cc2b6e85f","Type":"ContainerStarted","Data":"1eef7ebf0f26b0a19d6c836bb1d35e96b4ad1b8018156e84c8339e4bcb3c95f2"} Jan 07 09:55:56 crc kubenswrapper[5131]: I0107 09:55:56.620795 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-x86wx" Jan 07 09:55:56 crc kubenswrapper[5131]: I0107 09:55:56.623271 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-mrfk7" event={"ID":"1697c475-b030-40da-9ed0-7884931c55fd","Type":"ContainerDied","Data":"1655b270d631c55f0b08f813293cd9fc1c3d3f116e210eb4b789357e15ce5728"} Jan 07 09:55:56 crc kubenswrapper[5131]: I0107 
09:55:56.623335 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-mrfk7" Jan 07 09:55:56 crc kubenswrapper[5131]: I0107 09:55:56.623339 5131 scope.go:117] "RemoveContainer" containerID="5c761bf1ec3d205aecf5fb8038cc4f468a7910c6de1826cdd084deade4fc5e4a" Jan 07 09:55:56 crc kubenswrapper[5131]: I0107 09:55:56.625585 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-x86wx" Jan 07 09:55:56 crc kubenswrapper[5131]: I0107 09:55:56.628491 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-db2q2" event={"ID":"07c53e69-8037-4261-a288-5f4505e6f7e5","Type":"ContainerDied","Data":"60e976610e387b1508ad0a187e081731c921de8bf370d4a976a0583047b8e088"} Jan 07 09:55:56 crc kubenswrapper[5131]: I0107 09:55:56.628515 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-db2q2" Jan 07 09:55:56 crc kubenswrapper[5131]: I0107 09:55:56.641434 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-x86wx" podStartSLOduration=2.64142325 podStartE2EDuration="2.64142325s" podCreationTimestamp="2026-01-07 09:55:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:55:56.639515546 +0000 UTC m=+384.805817150" watchObservedRunningTime="2026-01-07 09:55:56.64142325 +0000 UTC m=+384.807724824" Jan 07 09:55:56 crc kubenswrapper[5131]: I0107 09:55:56.657085 5131 scope.go:117] "RemoveContainer" containerID="be639cc6e215a1653e1882ac31c810cdc436d6d4900641afe82fc02c9e461a7a" Jan 07 09:55:56 crc kubenswrapper[5131]: I0107 09:55:56.663196 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-db2q2"] Jan 07 09:55:56 crc kubenswrapper[5131]: I0107 09:55:56.668052 5131 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-db2q2"] Jan 07 09:55:56 crc kubenswrapper[5131]: I0107 09:55:56.682791 5131 scope.go:117] "RemoveContainer" containerID="8392a1a78fda39698fcbcbfb30761721a4e5f331f86588c4082e97b6ba8c5083" Jan 07 09:55:56 crc kubenswrapper[5131]: I0107 09:55:56.717458 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-mrfk7"] Jan 07 09:55:56 crc kubenswrapper[5131]: I0107 09:55:56.717978 5131 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-mrfk7"] Jan 07 09:55:56 crc kubenswrapper[5131]: I0107 09:55:56.730079 5131 scope.go:117] "RemoveContainer" containerID="697b3d6414dd3ec4cce467df2b3ccd3a0f454800e497b97f5e1c4df4bdf4b8b4" Jan 07 09:55:56 crc kubenswrapper[5131]: I0107 09:55:56.832482 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-js8mt"] Jan 07 09:55:56 crc kubenswrapper[5131]: I0107 09:55:56.833329 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b8cac87e-c013-4988-a977-5b1f038c1d34" containerName="extract-content" Jan 07 09:55:56 crc kubenswrapper[5131]: I0107 09:55:56.833431 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8cac87e-c013-4988-a977-5b1f038c1d34" containerName="extract-content" Jan 07 09:55:56 crc kubenswrapper[5131]: I0107 09:55:56.833519 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="07c53e69-8037-4261-a288-5f4505e6f7e5" containerName="extract-content" Jan 07 09:55:56 crc kubenswrapper[5131]: I0107 09:55:56.833598 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="07c53e69-8037-4261-a288-5f4505e6f7e5" containerName="extract-content" Jan 07 09:55:56 crc kubenswrapper[5131]: I0107 
09:55:56.833677 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b8cac87e-c013-4988-a977-5b1f038c1d34" containerName="registry-server" Jan 07 09:55:56 crc kubenswrapper[5131]: I0107 09:55:56.833752 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8cac87e-c013-4988-a977-5b1f038c1d34" containerName="registry-server" Jan 07 09:55:56 crc kubenswrapper[5131]: I0107 09:55:56.833846 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f04172ba-2c1f-4d8f-b742-7d182136ca81" containerName="extract-utilities" Jan 07 09:55:56 crc kubenswrapper[5131]: I0107 09:55:56.833929 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="f04172ba-2c1f-4d8f-b742-7d182136ca81" containerName="extract-utilities" Jan 07 09:55:56 crc kubenswrapper[5131]: I0107 09:55:56.834022 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1697c475-b030-40da-9ed0-7884931c55fd" containerName="marketplace-operator" Jan 07 09:55:56 crc kubenswrapper[5131]: I0107 09:55:56.834099 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="1697c475-b030-40da-9ed0-7884931c55fd" containerName="marketplace-operator" Jan 07 09:55:56 crc kubenswrapper[5131]: I0107 09:55:56.834174 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4a44502e-cd8c-4525-95f6-33c1eab86d42" containerName="registry-server" Jan 07 09:55:56 crc kubenswrapper[5131]: I0107 09:55:56.834255 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a44502e-cd8c-4525-95f6-33c1eab86d42" containerName="registry-server" Jan 07 09:55:56 crc kubenswrapper[5131]: I0107 09:55:56.834336 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4a44502e-cd8c-4525-95f6-33c1eab86d42" containerName="extract-utilities" Jan 07 09:55:56 crc kubenswrapper[5131]: I0107 09:55:56.834409 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a44502e-cd8c-4525-95f6-33c1eab86d42" 
containerName="extract-utilities" Jan 07 09:55:56 crc kubenswrapper[5131]: I0107 09:55:56.834489 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="07c53e69-8037-4261-a288-5f4505e6f7e5" containerName="registry-server" Jan 07 09:55:56 crc kubenswrapper[5131]: I0107 09:55:56.834561 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="07c53e69-8037-4261-a288-5f4505e6f7e5" containerName="registry-server" Jan 07 09:55:56 crc kubenswrapper[5131]: I0107 09:55:56.834641 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f04172ba-2c1f-4d8f-b742-7d182136ca81" containerName="extract-content" Jan 07 09:55:56 crc kubenswrapper[5131]: I0107 09:55:56.834716 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="f04172ba-2c1f-4d8f-b742-7d182136ca81" containerName="extract-content" Jan 07 09:55:56 crc kubenswrapper[5131]: I0107 09:55:56.834820 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f04172ba-2c1f-4d8f-b742-7d182136ca81" containerName="registry-server" Jan 07 09:55:56 crc kubenswrapper[5131]: I0107 09:55:56.834921 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="f04172ba-2c1f-4d8f-b742-7d182136ca81" containerName="registry-server" Jan 07 09:55:56 crc kubenswrapper[5131]: I0107 09:55:56.835008 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b8cac87e-c013-4988-a977-5b1f038c1d34" containerName="extract-utilities" Jan 07 09:55:56 crc kubenswrapper[5131]: I0107 09:55:56.835081 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8cac87e-c013-4988-a977-5b1f038c1d34" containerName="extract-utilities" Jan 07 09:55:56 crc kubenswrapper[5131]: I0107 09:55:56.835154 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4a44502e-cd8c-4525-95f6-33c1eab86d42" containerName="extract-content" Jan 07 09:55:56 crc kubenswrapper[5131]: I0107 09:55:56.835226 5131 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="4a44502e-cd8c-4525-95f6-33c1eab86d42" containerName="extract-content" Jan 07 09:55:56 crc kubenswrapper[5131]: I0107 09:55:56.835299 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="07c53e69-8037-4261-a288-5f4505e6f7e5" containerName="extract-utilities" Jan 07 09:55:56 crc kubenswrapper[5131]: I0107 09:55:56.835478 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="07c53e69-8037-4261-a288-5f4505e6f7e5" containerName="extract-utilities" Jan 07 09:55:56 crc kubenswrapper[5131]: I0107 09:55:56.835641 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="07c53e69-8037-4261-a288-5f4505e6f7e5" containerName="registry-server" Jan 07 09:55:56 crc kubenswrapper[5131]: I0107 09:55:56.835769 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="b8cac87e-c013-4988-a977-5b1f038c1d34" containerName="registry-server" Jan 07 09:55:56 crc kubenswrapper[5131]: I0107 09:55:56.837784 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="4a44502e-cd8c-4525-95f6-33c1eab86d42" containerName="registry-server" Jan 07 09:55:56 crc kubenswrapper[5131]: I0107 09:55:56.837918 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="1697c475-b030-40da-9ed0-7884931c55fd" containerName="marketplace-operator" Jan 07 09:55:56 crc kubenswrapper[5131]: I0107 09:55:56.837995 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="f04172ba-2c1f-4d8f-b742-7d182136ca81" containerName="registry-server" Jan 07 09:55:56 crc kubenswrapper[5131]: I0107 09:55:56.838072 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="1697c475-b030-40da-9ed0-7884931c55fd" containerName="marketplace-operator" Jan 07 09:55:56 crc kubenswrapper[5131]: I0107 09:55:56.838294 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1697c475-b030-40da-9ed0-7884931c55fd" containerName="marketplace-operator" Jan 07 09:55:56 crc 
kubenswrapper[5131]: I0107 09:55:56.838377 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="1697c475-b030-40da-9ed0-7884931c55fd" containerName="marketplace-operator" Jan 07 09:55:56 crc kubenswrapper[5131]: I0107 09:55:56.847911 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-js8mt"] Jan 07 09:55:56 crc kubenswrapper[5131]: I0107 09:55:56.848119 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-js8mt" Jan 07 09:55:56 crc kubenswrapper[5131]: I0107 09:55:56.851326 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Jan 07 09:55:56 crc kubenswrapper[5131]: I0107 09:55:56.922212 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4abda25d-f804-4184-a568-5c0fa0263526-catalog-content\") pod \"certified-operators-js8mt\" (UID: \"4abda25d-f804-4184-a568-5c0fa0263526\") " pod="openshift-marketplace/certified-operators-js8mt" Jan 07 09:55:56 crc kubenswrapper[5131]: I0107 09:55:56.922270 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4abda25d-f804-4184-a568-5c0fa0263526-utilities\") pod \"certified-operators-js8mt\" (UID: \"4abda25d-f804-4184-a568-5c0fa0263526\") " pod="openshift-marketplace/certified-operators-js8mt" Jan 07 09:55:56 crc kubenswrapper[5131]: I0107 09:55:56.922332 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdjhm\" (UniqueName: \"kubernetes.io/projected/4abda25d-f804-4184-a568-5c0fa0263526-kube-api-access-fdjhm\") pod \"certified-operators-js8mt\" (UID: \"4abda25d-f804-4184-a568-5c0fa0263526\") " pod="openshift-marketplace/certified-operators-js8mt" 
Jan 07 09:55:57 crc kubenswrapper[5131]: I0107 09:55:57.023385 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4abda25d-f804-4184-a568-5c0fa0263526-catalog-content\") pod \"certified-operators-js8mt\" (UID: \"4abda25d-f804-4184-a568-5c0fa0263526\") " pod="openshift-marketplace/certified-operators-js8mt" Jan 07 09:55:57 crc kubenswrapper[5131]: I0107 09:55:57.023429 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4abda25d-f804-4184-a568-5c0fa0263526-utilities\") pod \"certified-operators-js8mt\" (UID: \"4abda25d-f804-4184-a568-5c0fa0263526\") " pod="openshift-marketplace/certified-operators-js8mt" Jan 07 09:55:57 crc kubenswrapper[5131]: I0107 09:55:57.023493 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fdjhm\" (UniqueName: \"kubernetes.io/projected/4abda25d-f804-4184-a568-5c0fa0263526-kube-api-access-fdjhm\") pod \"certified-operators-js8mt\" (UID: \"4abda25d-f804-4184-a568-5c0fa0263526\") " pod="openshift-marketplace/certified-operators-js8mt" Jan 07 09:55:57 crc kubenswrapper[5131]: I0107 09:55:57.024200 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4abda25d-f804-4184-a568-5c0fa0263526-utilities\") pod \"certified-operators-js8mt\" (UID: \"4abda25d-f804-4184-a568-5c0fa0263526\") " pod="openshift-marketplace/certified-operators-js8mt" Jan 07 09:55:57 crc kubenswrapper[5131]: I0107 09:55:57.033782 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4abda25d-f804-4184-a568-5c0fa0263526-catalog-content\") pod \"certified-operators-js8mt\" (UID: \"4abda25d-f804-4184-a568-5c0fa0263526\") " pod="openshift-marketplace/certified-operators-js8mt" Jan 07 09:55:57 crc kubenswrapper[5131]: 
I0107 09:55:57.038546 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bgfp5"] Jan 07 09:55:57 crc kubenswrapper[5131]: I0107 09:55:57.044853 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bgfp5" Jan 07 09:55:57 crc kubenswrapper[5131]: I0107 09:55:57.050226 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Jan 07 09:55:57 crc kubenswrapper[5131]: I0107 09:55:57.055137 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bgfp5"] Jan 07 09:55:57 crc kubenswrapper[5131]: I0107 09:55:57.057155 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdjhm\" (UniqueName: \"kubernetes.io/projected/4abda25d-f804-4184-a568-5c0fa0263526-kube-api-access-fdjhm\") pod \"certified-operators-js8mt\" (UID: \"4abda25d-f804-4184-a568-5c0fa0263526\") " pod="openshift-marketplace/certified-operators-js8mt" Jan 07 09:55:57 crc kubenswrapper[5131]: I0107 09:55:57.124583 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbd5c\" (UniqueName: \"kubernetes.io/projected/d2e7ccee-a017-4cfa-8d6e-4c56c68e31c1-kube-api-access-wbd5c\") pod \"redhat-marketplace-bgfp5\" (UID: \"d2e7ccee-a017-4cfa-8d6e-4c56c68e31c1\") " pod="openshift-marketplace/redhat-marketplace-bgfp5" Jan 07 09:55:57 crc kubenswrapper[5131]: I0107 09:55:57.124624 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2e7ccee-a017-4cfa-8d6e-4c56c68e31c1-catalog-content\") pod \"redhat-marketplace-bgfp5\" (UID: \"d2e7ccee-a017-4cfa-8d6e-4c56c68e31c1\") " pod="openshift-marketplace/redhat-marketplace-bgfp5" Jan 07 09:55:57 crc kubenswrapper[5131]: I0107 
09:55:57.124664 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2e7ccee-a017-4cfa-8d6e-4c56c68e31c1-utilities\") pod \"redhat-marketplace-bgfp5\" (UID: \"d2e7ccee-a017-4cfa-8d6e-4c56c68e31c1\") " pod="openshift-marketplace/redhat-marketplace-bgfp5" Jan 07 09:55:57 crc kubenswrapper[5131]: I0107 09:55:57.166932 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-js8mt" Jan 07 09:55:57 crc kubenswrapper[5131]: I0107 09:55:57.225649 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wbd5c\" (UniqueName: \"kubernetes.io/projected/d2e7ccee-a017-4cfa-8d6e-4c56c68e31c1-kube-api-access-wbd5c\") pod \"redhat-marketplace-bgfp5\" (UID: \"d2e7ccee-a017-4cfa-8d6e-4c56c68e31c1\") " pod="openshift-marketplace/redhat-marketplace-bgfp5" Jan 07 09:55:57 crc kubenswrapper[5131]: I0107 09:55:57.225692 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2e7ccee-a017-4cfa-8d6e-4c56c68e31c1-catalog-content\") pod \"redhat-marketplace-bgfp5\" (UID: \"d2e7ccee-a017-4cfa-8d6e-4c56c68e31c1\") " pod="openshift-marketplace/redhat-marketplace-bgfp5" Jan 07 09:55:57 crc kubenswrapper[5131]: I0107 09:55:57.225725 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2e7ccee-a017-4cfa-8d6e-4c56c68e31c1-utilities\") pod \"redhat-marketplace-bgfp5\" (UID: \"d2e7ccee-a017-4cfa-8d6e-4c56c68e31c1\") " pod="openshift-marketplace/redhat-marketplace-bgfp5" Jan 07 09:55:57 crc kubenswrapper[5131]: I0107 09:55:57.226177 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2e7ccee-a017-4cfa-8d6e-4c56c68e31c1-utilities\") pod 
\"redhat-marketplace-bgfp5\" (UID: \"d2e7ccee-a017-4cfa-8d6e-4c56c68e31c1\") " pod="openshift-marketplace/redhat-marketplace-bgfp5" Jan 07 09:55:57 crc kubenswrapper[5131]: I0107 09:55:57.226312 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2e7ccee-a017-4cfa-8d6e-4c56c68e31c1-catalog-content\") pod \"redhat-marketplace-bgfp5\" (UID: \"d2e7ccee-a017-4cfa-8d6e-4c56c68e31c1\") " pod="openshift-marketplace/redhat-marketplace-bgfp5" Jan 07 09:55:57 crc kubenswrapper[5131]: I0107 09:55:57.243146 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbd5c\" (UniqueName: \"kubernetes.io/projected/d2e7ccee-a017-4cfa-8d6e-4c56c68e31c1-kube-api-access-wbd5c\") pod \"redhat-marketplace-bgfp5\" (UID: \"d2e7ccee-a017-4cfa-8d6e-4c56c68e31c1\") " pod="openshift-marketplace/redhat-marketplace-bgfp5" Jan 07 09:55:57 crc kubenswrapper[5131]: I0107 09:55:57.380803 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bgfp5" Jan 07 09:55:57 crc kubenswrapper[5131]: I0107 09:55:57.551053 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-js8mt"] Jan 07 09:55:57 crc kubenswrapper[5131]: I0107 09:55:57.575736 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bgfp5"] Jan 07 09:55:57 crc kubenswrapper[5131]: I0107 09:55:57.642522 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bgfp5" event={"ID":"d2e7ccee-a017-4cfa-8d6e-4c56c68e31c1","Type":"ContainerStarted","Data":"a6460152d9ea77915d43d9297d3cbba5884190b8b4882c9a9d6593180c408994"} Jan 07 09:55:57 crc kubenswrapper[5131]: I0107 09:55:57.643643 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-js8mt" event={"ID":"4abda25d-f804-4184-a568-5c0fa0263526","Type":"ContainerStarted","Data":"b70edfd5341070c1d9a0dbb24c9bd730b99e6f4b4f660bae4051940f44f88b90"} Jan 07 09:55:58 crc kubenswrapper[5131]: I0107 09:55:58.187468 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07c53e69-8037-4261-a288-5f4505e6f7e5" path="/var/lib/kubelet/pods/07c53e69-8037-4261-a288-5f4505e6f7e5/volumes" Jan 07 09:55:58 crc kubenswrapper[5131]: I0107 09:55:58.188696 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1697c475-b030-40da-9ed0-7884931c55fd" path="/var/lib/kubelet/pods/1697c475-b030-40da-9ed0-7884931c55fd/volumes" Jan 07 09:55:58 crc kubenswrapper[5131]: I0107 09:55:58.655042 5131 generic.go:358] "Generic (PLEG): container finished" podID="4abda25d-f804-4184-a568-5c0fa0263526" containerID="a22bcbf60f70958449f88cd9f1fed04611aa891e73fd01c3fb828324287e038a" exitCode=0 Jan 07 09:55:58 crc kubenswrapper[5131]: I0107 09:55:58.655079 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-js8mt" 
event={"ID":"4abda25d-f804-4184-a568-5c0fa0263526","Type":"ContainerDied","Data":"a22bcbf60f70958449f88cd9f1fed04611aa891e73fd01c3fb828324287e038a"} Jan 07 09:55:58 crc kubenswrapper[5131]: I0107 09:55:58.656857 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bgfp5" event={"ID":"d2e7ccee-a017-4cfa-8d6e-4c56c68e31c1","Type":"ContainerDied","Data":"4456ff8cf1f24e802046ef4a3e0e3e25a06e65314293465ded2da49bc4528cac"} Jan 07 09:55:58 crc kubenswrapper[5131]: I0107 09:55:58.656872 5131 generic.go:358] "Generic (PLEG): container finished" podID="d2e7ccee-a017-4cfa-8d6e-4c56c68e31c1" containerID="4456ff8cf1f24e802046ef4a3e0e3e25a06e65314293465ded2da49bc4528cac" exitCode=0 Jan 07 09:55:59 crc kubenswrapper[5131]: I0107 09:55:59.241706 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9km69"] Jan 07 09:55:59 crc kubenswrapper[5131]: I0107 09:55:59.251519 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9km69" Jan 07 09:55:59 crc kubenswrapper[5131]: I0107 09:55:59.260251 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9km69"] Jan 07 09:55:59 crc kubenswrapper[5131]: I0107 09:55:59.296999 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Jan 07 09:55:59 crc kubenswrapper[5131]: I0107 09:55:59.351656 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vzc9\" (UniqueName: \"kubernetes.io/projected/04a6ed00-35a6-41aa-a83a-f388fabdec33-kube-api-access-2vzc9\") pod \"community-operators-9km69\" (UID: \"04a6ed00-35a6-41aa-a83a-f388fabdec33\") " pod="openshift-marketplace/community-operators-9km69" Jan 07 09:55:59 crc kubenswrapper[5131]: I0107 09:55:59.351712 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04a6ed00-35a6-41aa-a83a-f388fabdec33-catalog-content\") pod \"community-operators-9km69\" (UID: \"04a6ed00-35a6-41aa-a83a-f388fabdec33\") " pod="openshift-marketplace/community-operators-9km69" Jan 07 09:55:59 crc kubenswrapper[5131]: I0107 09:55:59.351775 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04a6ed00-35a6-41aa-a83a-f388fabdec33-utilities\") pod \"community-operators-9km69\" (UID: \"04a6ed00-35a6-41aa-a83a-f388fabdec33\") " pod="openshift-marketplace/community-operators-9km69" Jan 07 09:55:59 crc kubenswrapper[5131]: I0107 09:55:59.434401 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7wdqb"] Jan 07 09:55:59 crc kubenswrapper[5131]: I0107 09:55:59.442631 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-operators-7wdqb"] Jan 07 09:55:59 crc kubenswrapper[5131]: I0107 09:55:59.442788 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7wdqb" Jan 07 09:55:59 crc kubenswrapper[5131]: I0107 09:55:59.444950 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Jan 07 09:55:59 crc kubenswrapper[5131]: I0107 09:55:59.452686 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2vzc9\" (UniqueName: \"kubernetes.io/projected/04a6ed00-35a6-41aa-a83a-f388fabdec33-kube-api-access-2vzc9\") pod \"community-operators-9km69\" (UID: \"04a6ed00-35a6-41aa-a83a-f388fabdec33\") " pod="openshift-marketplace/community-operators-9km69" Jan 07 09:55:59 crc kubenswrapper[5131]: I0107 09:55:59.452779 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04a6ed00-35a6-41aa-a83a-f388fabdec33-catalog-content\") pod \"community-operators-9km69\" (UID: \"04a6ed00-35a6-41aa-a83a-f388fabdec33\") " pod="openshift-marketplace/community-operators-9km69" Jan 07 09:55:59 crc kubenswrapper[5131]: I0107 09:55:59.452871 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04a6ed00-35a6-41aa-a83a-f388fabdec33-utilities\") pod \"community-operators-9km69\" (UID: \"04a6ed00-35a6-41aa-a83a-f388fabdec33\") " pod="openshift-marketplace/community-operators-9km69" Jan 07 09:55:59 crc kubenswrapper[5131]: I0107 09:55:59.453235 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04a6ed00-35a6-41aa-a83a-f388fabdec33-catalog-content\") pod \"community-operators-9km69\" (UID: \"04a6ed00-35a6-41aa-a83a-f388fabdec33\") " 
pod="openshift-marketplace/community-operators-9km69" Jan 07 09:55:59 crc kubenswrapper[5131]: I0107 09:55:59.457249 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04a6ed00-35a6-41aa-a83a-f388fabdec33-utilities\") pod \"community-operators-9km69\" (UID: \"04a6ed00-35a6-41aa-a83a-f388fabdec33\") " pod="openshift-marketplace/community-operators-9km69" Jan 07 09:55:59 crc kubenswrapper[5131]: I0107 09:55:59.474310 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vzc9\" (UniqueName: \"kubernetes.io/projected/04a6ed00-35a6-41aa-a83a-f388fabdec33-kube-api-access-2vzc9\") pod \"community-operators-9km69\" (UID: \"04a6ed00-35a6-41aa-a83a-f388fabdec33\") " pod="openshift-marketplace/community-operators-9km69" Jan 07 09:55:59 crc kubenswrapper[5131]: I0107 09:55:59.553441 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8sm8\" (UniqueName: \"kubernetes.io/projected/adc35ff8-fb6e-44fb-ad67-4ba5b89e8a31-kube-api-access-l8sm8\") pod \"redhat-operators-7wdqb\" (UID: \"adc35ff8-fb6e-44fb-ad67-4ba5b89e8a31\") " pod="openshift-marketplace/redhat-operators-7wdqb" Jan 07 09:55:59 crc kubenswrapper[5131]: I0107 09:55:59.553730 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adc35ff8-fb6e-44fb-ad67-4ba5b89e8a31-utilities\") pod \"redhat-operators-7wdqb\" (UID: \"adc35ff8-fb6e-44fb-ad67-4ba5b89e8a31\") " pod="openshift-marketplace/redhat-operators-7wdqb" Jan 07 09:55:59 crc kubenswrapper[5131]: I0107 09:55:59.553824 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adc35ff8-fb6e-44fb-ad67-4ba5b89e8a31-catalog-content\") pod \"redhat-operators-7wdqb\" (UID: 
\"adc35ff8-fb6e-44fb-ad67-4ba5b89e8a31\") " pod="openshift-marketplace/redhat-operators-7wdqb" Jan 07 09:55:59 crc kubenswrapper[5131]: I0107 09:55:59.655104 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l8sm8\" (UniqueName: \"kubernetes.io/projected/adc35ff8-fb6e-44fb-ad67-4ba5b89e8a31-kube-api-access-l8sm8\") pod \"redhat-operators-7wdqb\" (UID: \"adc35ff8-fb6e-44fb-ad67-4ba5b89e8a31\") " pod="openshift-marketplace/redhat-operators-7wdqb" Jan 07 09:55:59 crc kubenswrapper[5131]: I0107 09:55:59.655176 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adc35ff8-fb6e-44fb-ad67-4ba5b89e8a31-utilities\") pod \"redhat-operators-7wdqb\" (UID: \"adc35ff8-fb6e-44fb-ad67-4ba5b89e8a31\") " pod="openshift-marketplace/redhat-operators-7wdqb" Jan 07 09:55:59 crc kubenswrapper[5131]: I0107 09:55:59.655197 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adc35ff8-fb6e-44fb-ad67-4ba5b89e8a31-catalog-content\") pod \"redhat-operators-7wdqb\" (UID: \"adc35ff8-fb6e-44fb-ad67-4ba5b89e8a31\") " pod="openshift-marketplace/redhat-operators-7wdqb" Jan 07 09:55:59 crc kubenswrapper[5131]: I0107 09:55:59.655520 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adc35ff8-fb6e-44fb-ad67-4ba5b89e8a31-utilities\") pod \"redhat-operators-7wdqb\" (UID: \"adc35ff8-fb6e-44fb-ad67-4ba5b89e8a31\") " pod="openshift-marketplace/redhat-operators-7wdqb" Jan 07 09:55:59 crc kubenswrapper[5131]: I0107 09:55:59.655542 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9km69" Jan 07 09:55:59 crc kubenswrapper[5131]: I0107 09:55:59.656951 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adc35ff8-fb6e-44fb-ad67-4ba5b89e8a31-catalog-content\") pod \"redhat-operators-7wdqb\" (UID: \"adc35ff8-fb6e-44fb-ad67-4ba5b89e8a31\") " pod="openshift-marketplace/redhat-operators-7wdqb" Jan 07 09:55:59 crc kubenswrapper[5131]: I0107 09:55:59.666015 5131 generic.go:358] "Generic (PLEG): container finished" podID="4abda25d-f804-4184-a568-5c0fa0263526" containerID="3e68f8590bad81e9516346455a371818dde04c9cda35a018bde34b764c5e4747" exitCode=0 Jan 07 09:55:59 crc kubenswrapper[5131]: I0107 09:55:59.666136 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-js8mt" event={"ID":"4abda25d-f804-4184-a568-5c0fa0263526","Type":"ContainerDied","Data":"3e68f8590bad81e9516346455a371818dde04c9cda35a018bde34b764c5e4747"} Jan 07 09:55:59 crc kubenswrapper[5131]: I0107 09:55:59.673758 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8sm8\" (UniqueName: \"kubernetes.io/projected/adc35ff8-fb6e-44fb-ad67-4ba5b89e8a31-kube-api-access-l8sm8\") pod \"redhat-operators-7wdqb\" (UID: \"adc35ff8-fb6e-44fb-ad67-4ba5b89e8a31\") " pod="openshift-marketplace/redhat-operators-7wdqb" Jan 07 09:55:59 crc kubenswrapper[5131]: I0107 09:55:59.674712 5131 generic.go:358] "Generic (PLEG): container finished" podID="d2e7ccee-a017-4cfa-8d6e-4c56c68e31c1" containerID="83008bb83257163c34f6ddd6dc16f94c9a4c954250b2b7e90ce28e0b57de0555" exitCode=0 Jan 07 09:55:59 crc kubenswrapper[5131]: I0107 09:55:59.674780 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bgfp5" 
event={"ID":"d2e7ccee-a017-4cfa-8d6e-4c56c68e31c1","Type":"ContainerDied","Data":"83008bb83257163c34f6ddd6dc16f94c9a4c954250b2b7e90ce28e0b57de0555"} Jan 07 09:55:59 crc kubenswrapper[5131]: I0107 09:55:59.773036 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7wdqb" Jan 07 09:56:00 crc kubenswrapper[5131]: I0107 09:56:00.062521 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9km69"] Jan 07 09:56:00 crc kubenswrapper[5131]: W0107 09:56:00.070459 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04a6ed00_35a6_41aa_a83a_f388fabdec33.slice/crio-c0dcdef9868d3f75be8340b6d974f15f3453bb0275dc5d32ad294285a61c2b2e WatchSource:0}: Error finding container c0dcdef9868d3f75be8340b6d974f15f3453bb0275dc5d32ad294285a61c2b2e: Status 404 returned error can't find the container with id c0dcdef9868d3f75be8340b6d974f15f3453bb0275dc5d32ad294285a61c2b2e Jan 07 09:56:00 crc kubenswrapper[5131]: W0107 09:56:00.186109 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadc35ff8_fb6e_44fb_ad67_4ba5b89e8a31.slice/crio-58f2d4e3d79ee12f29df57577913ae5eec6cad3eea57e71dff49d8f816241020 WatchSource:0}: Error finding container 58f2d4e3d79ee12f29df57577913ae5eec6cad3eea57e71dff49d8f816241020: Status 404 returned error can't find the container with id 58f2d4e3d79ee12f29df57577913ae5eec6cad3eea57e71dff49d8f816241020 Jan 07 09:56:00 crc kubenswrapper[5131]: I0107 09:56:00.187822 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7wdqb"] Jan 07 09:56:00 crc kubenswrapper[5131]: I0107 09:56:00.217757 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-kg2xc"] Jan 07 09:56:00 crc kubenswrapper[5131]: I0107 
09:56:00.225888 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-kg2xc"
Jan 07 09:56:00 crc kubenswrapper[5131]: I0107 09:56:00.227480 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-kg2xc"]
Jan 07 09:56:00 crc kubenswrapper[5131]: I0107 09:56:00.264319 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/888f775a-e343-4c8b-ab26-8358a56a84ee-registry-certificates\") pod \"image-registry-5d9d95bf5b-kg2xc\" (UID: \"888f775a-e343-4c8b-ab26-8358a56a84ee\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-kg2xc"
Jan 07 09:56:00 crc kubenswrapper[5131]: I0107 09:56:00.264372 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/888f775a-e343-4c8b-ab26-8358a56a84ee-trusted-ca\") pod \"image-registry-5d9d95bf5b-kg2xc\" (UID: \"888f775a-e343-4c8b-ab26-8358a56a84ee\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-kg2xc"
Jan 07 09:56:00 crc kubenswrapper[5131]: I0107 09:56:00.264406 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-kg2xc\" (UID: \"888f775a-e343-4c8b-ab26-8358a56a84ee\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-kg2xc"
Jan 07 09:56:00 crc kubenswrapper[5131]: I0107 09:56:00.264432 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/888f775a-e343-4c8b-ab26-8358a56a84ee-bound-sa-token\") pod \"image-registry-5d9d95bf5b-kg2xc\" (UID: \"888f775a-e343-4c8b-ab26-8358a56a84ee\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-kg2xc"
Jan 07 09:56:00 crc kubenswrapper[5131]: I0107 09:56:00.264456 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84bfv\" (UniqueName: \"kubernetes.io/projected/888f775a-e343-4c8b-ab26-8358a56a84ee-kube-api-access-84bfv\") pod \"image-registry-5d9d95bf5b-kg2xc\" (UID: \"888f775a-e343-4c8b-ab26-8358a56a84ee\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-kg2xc"
Jan 07 09:56:00 crc kubenswrapper[5131]: I0107 09:56:00.264488 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/888f775a-e343-4c8b-ab26-8358a56a84ee-registry-tls\") pod \"image-registry-5d9d95bf5b-kg2xc\" (UID: \"888f775a-e343-4c8b-ab26-8358a56a84ee\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-kg2xc"
Jan 07 09:56:00 crc kubenswrapper[5131]: I0107 09:56:00.264641 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/888f775a-e343-4c8b-ab26-8358a56a84ee-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-kg2xc\" (UID: \"888f775a-e343-4c8b-ab26-8358a56a84ee\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-kg2xc"
Jan 07 09:56:00 crc kubenswrapper[5131]: I0107 09:56:00.264666 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/888f775a-e343-4c8b-ab26-8358a56a84ee-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-kg2xc\" (UID: \"888f775a-e343-4c8b-ab26-8358a56a84ee\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-kg2xc"
Jan 07 09:56:00 crc kubenswrapper[5131]: I0107 09:56:00.298482 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-kg2xc\" (UID: \"888f775a-e343-4c8b-ab26-8358a56a84ee\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-kg2xc"
Jan 07 09:56:00 crc kubenswrapper[5131]: I0107 09:56:00.365266 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/888f775a-e343-4c8b-ab26-8358a56a84ee-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-kg2xc\" (UID: \"888f775a-e343-4c8b-ab26-8358a56a84ee\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-kg2xc"
Jan 07 09:56:00 crc kubenswrapper[5131]: I0107 09:56:00.365298 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/888f775a-e343-4c8b-ab26-8358a56a84ee-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-kg2xc\" (UID: \"888f775a-e343-4c8b-ab26-8358a56a84ee\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-kg2xc"
Jan 07 09:56:00 crc kubenswrapper[5131]: I0107 09:56:00.365491 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/888f775a-e343-4c8b-ab26-8358a56a84ee-registry-certificates\") pod \"image-registry-5d9d95bf5b-kg2xc\" (UID: \"888f775a-e343-4c8b-ab26-8358a56a84ee\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-kg2xc"
Jan 07 09:56:00 crc kubenswrapper[5131]: I0107 09:56:00.365551 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/888f775a-e343-4c8b-ab26-8358a56a84ee-trusted-ca\") pod \"image-registry-5d9d95bf5b-kg2xc\" (UID: \"888f775a-e343-4c8b-ab26-8358a56a84ee\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-kg2xc"
Jan 07 09:56:00 crc kubenswrapper[5131]: I0107 09:56:00.365596 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/888f775a-e343-4c8b-ab26-8358a56a84ee-bound-sa-token\") pod \"image-registry-5d9d95bf5b-kg2xc\" (UID: \"888f775a-e343-4c8b-ab26-8358a56a84ee\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-kg2xc"
Jan 07 09:56:00 crc kubenswrapper[5131]: I0107 09:56:00.365619 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-84bfv\" (UniqueName: \"kubernetes.io/projected/888f775a-e343-4c8b-ab26-8358a56a84ee-kube-api-access-84bfv\") pod \"image-registry-5d9d95bf5b-kg2xc\" (UID: \"888f775a-e343-4c8b-ab26-8358a56a84ee\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-kg2xc"
Jan 07 09:56:00 crc kubenswrapper[5131]: I0107 09:56:00.365669 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/888f775a-e343-4c8b-ab26-8358a56a84ee-registry-tls\") pod \"image-registry-5d9d95bf5b-kg2xc\" (UID: \"888f775a-e343-4c8b-ab26-8358a56a84ee\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-kg2xc"
Jan 07 09:56:00 crc kubenswrapper[5131]: I0107 09:56:00.365793 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/888f775a-e343-4c8b-ab26-8358a56a84ee-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-kg2xc\" (UID: \"888f775a-e343-4c8b-ab26-8358a56a84ee\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-kg2xc"
Jan 07 09:56:00 crc kubenswrapper[5131]: I0107 09:56:00.366681 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/888f775a-e343-4c8b-ab26-8358a56a84ee-trusted-ca\") pod \"image-registry-5d9d95bf5b-kg2xc\" (UID: \"888f775a-e343-4c8b-ab26-8358a56a84ee\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-kg2xc"
Jan 07 09:56:00 crc kubenswrapper[5131]: I0107 09:56:00.366817 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/888f775a-e343-4c8b-ab26-8358a56a84ee-registry-certificates\") pod \"image-registry-5d9d95bf5b-kg2xc\" (UID: \"888f775a-e343-4c8b-ab26-8358a56a84ee\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-kg2xc"
Jan 07 09:56:00 crc kubenswrapper[5131]: I0107 09:56:00.371583 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/888f775a-e343-4c8b-ab26-8358a56a84ee-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-kg2xc\" (UID: \"888f775a-e343-4c8b-ab26-8358a56a84ee\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-kg2xc"
Jan 07 09:56:00 crc kubenswrapper[5131]: I0107 09:56:00.372445 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/888f775a-e343-4c8b-ab26-8358a56a84ee-registry-tls\") pod \"image-registry-5d9d95bf5b-kg2xc\" (UID: \"888f775a-e343-4c8b-ab26-8358a56a84ee\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-kg2xc"
Jan 07 09:56:00 crc kubenswrapper[5131]: I0107 09:56:00.383276 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/888f775a-e343-4c8b-ab26-8358a56a84ee-bound-sa-token\") pod \"image-registry-5d9d95bf5b-kg2xc\" (UID: \"888f775a-e343-4c8b-ab26-8358a56a84ee\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-kg2xc"
Jan 07 09:56:00 crc kubenswrapper[5131]: I0107 09:56:00.384117 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-84bfv\" (UniqueName: \"kubernetes.io/projected/888f775a-e343-4c8b-ab26-8358a56a84ee-kube-api-access-84bfv\") pod \"image-registry-5d9d95bf5b-kg2xc\" (UID: \"888f775a-e343-4c8b-ab26-8358a56a84ee\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-kg2xc"
Jan 07 09:56:00 crc kubenswrapper[5131]: I0107 09:56:00.543544 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-kg2xc"
Jan 07 09:56:00 crc kubenswrapper[5131]: I0107 09:56:00.684799 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-js8mt" event={"ID":"4abda25d-f804-4184-a568-5c0fa0263526","Type":"ContainerStarted","Data":"7d5b57e0fe803125ace7833af608b62ef926ff3a3eb77457cffec3b294337479"}
Jan 07 09:56:00 crc kubenswrapper[5131]: I0107 09:56:00.692288 5131 generic.go:358] "Generic (PLEG): container finished" podID="04a6ed00-35a6-41aa-a83a-f388fabdec33" containerID="480bb6eb6c038ba1f5d1fc75c0e700c91d4d8dffd5741bce7f969f2831c644a7" exitCode=0
Jan 07 09:56:00 crc kubenswrapper[5131]: I0107 09:56:00.692453 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9km69" event={"ID":"04a6ed00-35a6-41aa-a83a-f388fabdec33","Type":"ContainerDied","Data":"480bb6eb6c038ba1f5d1fc75c0e700c91d4d8dffd5741bce7f969f2831c644a7"}
Jan 07 09:56:00 crc kubenswrapper[5131]: I0107 09:56:00.692480 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9km69" event={"ID":"04a6ed00-35a6-41aa-a83a-f388fabdec33","Type":"ContainerStarted","Data":"c0dcdef9868d3f75be8340b6d974f15f3453bb0275dc5d32ad294285a61c2b2e"}
Jan 07 09:56:00 crc kubenswrapper[5131]: I0107 09:56:00.697726 5131 generic.go:358] "Generic (PLEG): container finished" podID="adc35ff8-fb6e-44fb-ad67-4ba5b89e8a31" containerID="b47746eb35deafd4b1b8c6b4369333460450733953caf7794a6ab3dbb7012683" exitCode=0
Jan 07 09:56:00 crc kubenswrapper[5131]: I0107 09:56:00.697916 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7wdqb" event={"ID":"adc35ff8-fb6e-44fb-ad67-4ba5b89e8a31","Type":"ContainerDied","Data":"b47746eb35deafd4b1b8c6b4369333460450733953caf7794a6ab3dbb7012683"}
Jan 07 09:56:00 crc kubenswrapper[5131]: I0107 09:56:00.697946 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7wdqb" event={"ID":"adc35ff8-fb6e-44fb-ad67-4ba5b89e8a31","Type":"ContainerStarted","Data":"58f2d4e3d79ee12f29df57577913ae5eec6cad3eea57e71dff49d8f816241020"}
Jan 07 09:56:00 crc kubenswrapper[5131]: I0107 09:56:00.735173 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-js8mt" podStartSLOduration=4.038715299 podStartE2EDuration="4.735152326s" podCreationTimestamp="2026-01-07 09:55:56 +0000 UTC" firstStartedPulling="2026-01-07 09:55:58.656230911 +0000 UTC m=+386.822532475" lastFinishedPulling="2026-01-07 09:55:59.352667918 +0000 UTC m=+387.518969502" observedRunningTime="2026-01-07 09:56:00.709381143 +0000 UTC m=+388.875682707" watchObservedRunningTime="2026-01-07 09:56:00.735152326 +0000 UTC m=+388.901453890"
Jan 07 09:56:00 crc kubenswrapper[5131]: I0107 09:56:00.742934 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bgfp5" event={"ID":"d2e7ccee-a017-4cfa-8d6e-4c56c68e31c1","Type":"ContainerStarted","Data":"296cfa3b9007799a672f9dab866c99c90d02dc3a3035a8ba568ed19a518ca5a5"}
Jan 07 09:56:00 crc kubenswrapper[5131]: I0107 09:56:00.778742 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bgfp5" podStartSLOduration=3.205855012 podStartE2EDuration="3.778727745s" podCreationTimestamp="2026-01-07 09:55:57 +0000 UTC" firstStartedPulling="2026-01-07 09:55:58.657707083 +0000 UTC m=+386.824008647" lastFinishedPulling="2026-01-07 09:55:59.230579816 +0000 UTC m=+387.396881380" observedRunningTime="2026-01-07 09:56:00.775466453 +0000 UTC m=+388.941768017" watchObservedRunningTime="2026-01-07 09:56:00.778727745 +0000 UTC m=+388.945029309"
Jan 07 09:56:00 crc kubenswrapper[5131]: I0107 09:56:00.984059 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-kg2xc"]
Jan 07 09:56:01 crc kubenswrapper[5131]: I0107 09:56:01.749047 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-kg2xc" event={"ID":"888f775a-e343-4c8b-ab26-8358a56a84ee","Type":"ContainerStarted","Data":"6e5dd876e1aebeb5c4bdc82fc23ffb58360f35dff489edbd3e6a45afc5fca507"}
Jan 07 09:56:01 crc kubenswrapper[5131]: I0107 09:56:01.749326 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-kg2xc" event={"ID":"888f775a-e343-4c8b-ab26-8358a56a84ee","Type":"ContainerStarted","Data":"622ddac10506b8814542418d27ef88fdf745b3089724f6992f3e4d8d70b434a1"}
Jan 07 09:56:01 crc kubenswrapper[5131]: I0107 09:56:01.749442 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-kg2xc"
Jan 07 09:56:01 crc kubenswrapper[5131]: I0107 09:56:01.752194 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7wdqb" event={"ID":"adc35ff8-fb6e-44fb-ad67-4ba5b89e8a31","Type":"ContainerStarted","Data":"2eb9204339f191c5a2f42805721224d374ff5ae16b8b57c28af4060d90b53fcc"}
Jan 07 09:56:01 crc kubenswrapper[5131]: I0107 09:56:01.780390 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-5d9d95bf5b-kg2xc" podStartSLOduration=1.780373332 podStartE2EDuration="1.780373332s" podCreationTimestamp="2026-01-07 09:56:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 09:56:01.779349552 +0000 UTC m=+389.945651116" watchObservedRunningTime="2026-01-07 09:56:01.780373332 +0000 UTC m=+389.946674896"
Jan 07 09:56:02 crc kubenswrapper[5131]: I0107 09:56:02.764975 5131 generic.go:358] "Generic (PLEG): container finished" podID="04a6ed00-35a6-41aa-a83a-f388fabdec33" containerID="465eaf5b6e7b48fee0a045c22f52b21f0246fe246d50a5a84674911061fb3746" exitCode=0
Jan 07 09:56:02 crc kubenswrapper[5131]: I0107 09:56:02.765005 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9km69" event={"ID":"04a6ed00-35a6-41aa-a83a-f388fabdec33","Type":"ContainerDied","Data":"465eaf5b6e7b48fee0a045c22f52b21f0246fe246d50a5a84674911061fb3746"}
Jan 07 09:56:02 crc kubenswrapper[5131]: I0107 09:56:02.770098 5131 generic.go:358] "Generic (PLEG): container finished" podID="adc35ff8-fb6e-44fb-ad67-4ba5b89e8a31" containerID="2eb9204339f191c5a2f42805721224d374ff5ae16b8b57c28af4060d90b53fcc" exitCode=0
Jan 07 09:56:02 crc kubenswrapper[5131]: I0107 09:56:02.770769 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7wdqb" event={"ID":"adc35ff8-fb6e-44fb-ad67-4ba5b89e8a31","Type":"ContainerDied","Data":"2eb9204339f191c5a2f42805721224d374ff5ae16b8b57c28af4060d90b53fcc"}
Jan 07 09:56:03 crc kubenswrapper[5131]: I0107 09:56:03.777976 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9km69" event={"ID":"04a6ed00-35a6-41aa-a83a-f388fabdec33","Type":"ContainerStarted","Data":"421fc5c75635d0b7f7e44d494437bb07cfe1f058b0da5e3972ae6de01d404637"}
Jan 07 09:56:03 crc kubenswrapper[5131]: I0107 09:56:03.780494 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7wdqb" event={"ID":"adc35ff8-fb6e-44fb-ad67-4ba5b89e8a31","Type":"ContainerStarted","Data":"507624a5b476bd5ac41328b4dad75bcab019e76df2bc2a3d84bd344cd9c939fd"}
Jan 07 09:56:03 crc kubenswrapper[5131]: I0107 09:56:03.795996 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9km69" podStartSLOduration=3.737994587 podStartE2EDuration="4.795978496s" podCreationTimestamp="2026-01-07 09:55:59 +0000 UTC" firstStartedPulling="2026-01-07 09:56:00.693203793 +0000 UTC m=+388.859505357" lastFinishedPulling="2026-01-07 09:56:01.751187692 +0000 UTC m=+389.917489266" observedRunningTime="2026-01-07 09:56:03.795449291 +0000 UTC m=+391.961750875" watchObservedRunningTime="2026-01-07 09:56:03.795978496 +0000 UTC m=+391.962280060"
Jan 07 09:56:03 crc kubenswrapper[5131]: I0107 09:56:03.812236 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7wdqb" podStartSLOduration=3.922448724 podStartE2EDuration="4.812220988s" podCreationTimestamp="2026-01-07 09:55:59 +0000 UTC" firstStartedPulling="2026-01-07 09:56:00.698610357 +0000 UTC m=+388.864911921" lastFinishedPulling="2026-01-07 09:56:01.588382621 +0000 UTC m=+389.754684185" observedRunningTime="2026-01-07 09:56:03.808479572 +0000 UTC m=+391.974781146" watchObservedRunningTime="2026-01-07 09:56:03.812220988 +0000 UTC m=+391.978522552"
Jan 07 09:56:07 crc kubenswrapper[5131]: I0107 09:56:07.167078 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-js8mt"
Jan 07 09:56:07 crc kubenswrapper[5131]: I0107 09:56:07.167440 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-js8mt"
Jan 07 09:56:07 crc kubenswrapper[5131]: I0107 09:56:07.207431 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-js8mt"
Jan 07 09:56:07 crc kubenswrapper[5131]: I0107 09:56:07.381947 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-bgfp5"
Jan 07 09:56:07 crc kubenswrapper[5131]: I0107 09:56:07.382276 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bgfp5"
Jan 07 09:56:07 crc kubenswrapper[5131]: I0107 09:56:07.426695 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bgfp5"
Jan 07 09:56:07 crc kubenswrapper[5131]: I0107 09:56:07.841714 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-js8mt"
Jan 07 09:56:07 crc kubenswrapper[5131]: I0107 09:56:07.842412 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bgfp5"
Jan 07 09:56:09 crc kubenswrapper[5131]: I0107 09:56:09.663898 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9km69"
Jan 07 09:56:09 crc kubenswrapper[5131]: I0107 09:56:09.664539 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-9km69"
Jan 07 09:56:09 crc kubenswrapper[5131]: I0107 09:56:09.745168 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9km69"
Jan 07 09:56:09 crc kubenswrapper[5131]: I0107 09:56:09.773547 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-7wdqb"
Jan 07 09:56:09 crc kubenswrapper[5131]: I0107 09:56:09.773593 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7wdqb"
Jan 07 09:56:09 crc kubenswrapper[5131]: I0107 09:56:09.817340 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7wdqb"
Jan 07 09:56:09 crc kubenswrapper[5131]: I0107 09:56:09.869329 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7wdqb"
Jan 07 09:56:09 crc kubenswrapper[5131]: I0107 09:56:09.872323 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9km69"
Jan 07 09:56:20 crc kubenswrapper[5131]: I0107 09:56:20.663595 5131 patch_prober.go:28] interesting pod/machine-config-daemon-dvdrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 07 09:56:20 crc kubenswrapper[5131]: I0107 09:56:20.664212 5131 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 07 09:56:22 crc kubenswrapper[5131]: I0107 09:56:22.903689 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-kg2xc"
Jan 07 09:56:22 crc kubenswrapper[5131]: I0107 09:56:22.964825 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-bc9f4"]
Jan 07 09:56:48 crc kubenswrapper[5131]: I0107 09:56:48.010437 5131 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" podUID="9e92757e-cc25-48a6-a774-5c2a8a281576" containerName="registry" containerID="cri-o://3b3aa65566a1147d227d73e1d2550b85e5a401d79085e5d1c810aacca9416c71" gracePeriod=30
Jan 07 09:56:48 crc kubenswrapper[5131]: I0107 09:56:48.492458 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-bc9f4"
Jan 07 09:56:48 crc kubenswrapper[5131]: I0107 09:56:48.669953 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkr9d\" (UniqueName: \"kubernetes.io/projected/9e92757e-cc25-48a6-a774-5c2a8a281576-kube-api-access-jkr9d\") pod \"9e92757e-cc25-48a6-a774-5c2a8a281576\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") "
Jan 07 09:56:48 crc kubenswrapper[5131]: I0107 09:56:48.670025 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e92757e-cc25-48a6-a774-5c2a8a281576-ca-trust-extracted\") pod \"9e92757e-cc25-48a6-a774-5c2a8a281576\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") "
Jan 07 09:56:48 crc kubenswrapper[5131]: I0107 09:56:48.670069 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e92757e-cc25-48a6-a774-5c2a8a281576-installation-pull-secrets\") pod \"9e92757e-cc25-48a6-a774-5c2a8a281576\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") "
Jan 07 09:56:48 crc kubenswrapper[5131]: I0107 09:56:48.670128 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e92757e-cc25-48a6-a774-5c2a8a281576-trusted-ca\") pod \"9e92757e-cc25-48a6-a774-5c2a8a281576\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") "
Jan 07 09:56:48 crc kubenswrapper[5131]: I0107 09:56:48.670171 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e92757e-cc25-48a6-a774-5c2a8a281576-bound-sa-token\") pod \"9e92757e-cc25-48a6-a774-5c2a8a281576\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") "
Jan 07 09:56:48 crc kubenswrapper[5131]: I0107 09:56:48.670222 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e92757e-cc25-48a6-a774-5c2a8a281576-registry-certificates\") pod \"9e92757e-cc25-48a6-a774-5c2a8a281576\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") "
Jan 07 09:56:48 crc kubenswrapper[5131]: I0107 09:56:48.670295 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e92757e-cc25-48a6-a774-5c2a8a281576-registry-tls\") pod \"9e92757e-cc25-48a6-a774-5c2a8a281576\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") "
Jan 07 09:56:48 crc kubenswrapper[5131]: I0107 09:56:48.670459 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e92757e-cc25-48a6-a774-5c2a8a281576\" (UID: \"9e92757e-cc25-48a6-a774-5c2a8a281576\") "
Jan 07 09:56:48 crc kubenswrapper[5131]: I0107 09:56:48.672502 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e92757e-cc25-48a6-a774-5c2a8a281576-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9e92757e-cc25-48a6-a774-5c2a8a281576" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 07 09:56:48 crc kubenswrapper[5131]: I0107 09:56:48.674067 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e92757e-cc25-48a6-a774-5c2a8a281576-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9e92757e-cc25-48a6-a774-5c2a8a281576" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 07 09:56:48 crc kubenswrapper[5131]: I0107 09:56:48.680007 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e92757e-cc25-48a6-a774-5c2a8a281576-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9e92757e-cc25-48a6-a774-5c2a8a281576" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 07 09:56:48 crc kubenswrapper[5131]: I0107 09:56:48.684243 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "registry-storage") pod "9e92757e-cc25-48a6-a774-5c2a8a281576" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue ""
Jan 07 09:56:48 crc kubenswrapper[5131]: I0107 09:56:48.686583 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e92757e-cc25-48a6-a774-5c2a8a281576-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9e92757e-cc25-48a6-a774-5c2a8a281576" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 07 09:56:48 crc kubenswrapper[5131]: I0107 09:56:48.687161 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e92757e-cc25-48a6-a774-5c2a8a281576-kube-api-access-jkr9d" (OuterVolumeSpecName: "kube-api-access-jkr9d") pod "9e92757e-cc25-48a6-a774-5c2a8a281576" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576"). InnerVolumeSpecName "kube-api-access-jkr9d". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 07 09:56:48 crc kubenswrapper[5131]: I0107 09:56:48.687826 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e92757e-cc25-48a6-a774-5c2a8a281576-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9e92757e-cc25-48a6-a774-5c2a8a281576" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 07 09:56:48 crc kubenswrapper[5131]: I0107 09:56:48.696087 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e92757e-cc25-48a6-a774-5c2a8a281576-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9e92757e-cc25-48a6-a774-5c2a8a281576" (UID: "9e92757e-cc25-48a6-a774-5c2a8a281576"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 07 09:56:48 crc kubenswrapper[5131]: I0107 09:56:48.771654 5131 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e92757e-cc25-48a6-a774-5c2a8a281576-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Jan 07 09:56:48 crc kubenswrapper[5131]: I0107 09:56:48.771707 5131 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e92757e-cc25-48a6-a774-5c2a8a281576-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Jan 07 09:56:48 crc kubenswrapper[5131]: I0107 09:56:48.771722 5131 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e92757e-cc25-48a6-a774-5c2a8a281576-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 07 09:56:48 crc kubenswrapper[5131]: I0107 09:56:48.771731 5131 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e92757e-cc25-48a6-a774-5c2a8a281576-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 07 09:56:48 crc kubenswrapper[5131]: I0107 09:56:48.771743 5131 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e92757e-cc25-48a6-a774-5c2a8a281576-registry-certificates\") on node \"crc\" DevicePath \"\""
Jan 07 09:56:48 crc kubenswrapper[5131]: I0107 09:56:48.771752 5131 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e92757e-cc25-48a6-a774-5c2a8a281576-registry-tls\") on node \"crc\" DevicePath \"\""
Jan 07 09:56:48 crc kubenswrapper[5131]: I0107 09:56:48.771765 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jkr9d\" (UniqueName: \"kubernetes.io/projected/9e92757e-cc25-48a6-a774-5c2a8a281576-kube-api-access-jkr9d\") on node \"crc\" DevicePath \"\""
Jan 07 09:56:49 crc kubenswrapper[5131]: I0107 09:56:49.088066 5131 generic.go:358] "Generic (PLEG): container finished" podID="9e92757e-cc25-48a6-a774-5c2a8a281576" containerID="3b3aa65566a1147d227d73e1d2550b85e5a401d79085e5d1c810aacca9416c71" exitCode=0
Jan 07 09:56:49 crc kubenswrapper[5131]: I0107 09:56:49.088144 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" event={"ID":"9e92757e-cc25-48a6-a774-5c2a8a281576","Type":"ContainerDied","Data":"3b3aa65566a1147d227d73e1d2550b85e5a401d79085e5d1c810aacca9416c71"}
Jan 07 09:56:49 crc kubenswrapper[5131]: I0107 09:56:49.089204 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-bc9f4" event={"ID":"9e92757e-cc25-48a6-a774-5c2a8a281576","Type":"ContainerDied","Data":"9e25bd48d055a2e3323cd6e0ca4f9c51f50ef1827794f04f7cfa1498a001f2af"}
Jan 07 09:56:49 crc kubenswrapper[5131]: I0107 09:56:49.089252 5131 scope.go:117] "RemoveContainer" containerID="3b3aa65566a1147d227d73e1d2550b85e5a401d79085e5d1c810aacca9416c71"
Jan 07 09:56:49 crc kubenswrapper[5131]: I0107 09:56:49.088209 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-bc9f4"
Jan 07 09:56:49 crc kubenswrapper[5131]: I0107 09:56:49.118385 5131 scope.go:117] "RemoveContainer" containerID="3b3aa65566a1147d227d73e1d2550b85e5a401d79085e5d1c810aacca9416c71"
Jan 07 09:56:49 crc kubenswrapper[5131]: E0107 09:56:49.119191 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b3aa65566a1147d227d73e1d2550b85e5a401d79085e5d1c810aacca9416c71\": container with ID starting with 3b3aa65566a1147d227d73e1d2550b85e5a401d79085e5d1c810aacca9416c71 not found: ID does not exist" containerID="3b3aa65566a1147d227d73e1d2550b85e5a401d79085e5d1c810aacca9416c71"
Jan 07 09:56:49 crc kubenswrapper[5131]: I0107 09:56:49.119243 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b3aa65566a1147d227d73e1d2550b85e5a401d79085e5d1c810aacca9416c71"} err="failed to get container status \"3b3aa65566a1147d227d73e1d2550b85e5a401d79085e5d1c810aacca9416c71\": rpc error: code = NotFound desc = could not find container \"3b3aa65566a1147d227d73e1d2550b85e5a401d79085e5d1c810aacca9416c71\": container with ID starting with 3b3aa65566a1147d227d73e1d2550b85e5a401d79085e5d1c810aacca9416c71 not found: ID does not exist"
Jan 07 09:56:49 crc kubenswrapper[5131]: I0107 09:56:49.133159 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-bc9f4"]
Jan 07 09:56:49 crc kubenswrapper[5131]: I0107 09:56:49.138095 5131 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-bc9f4"]
Jan 07 09:56:50 crc kubenswrapper[5131]: I0107 09:56:50.191210 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e92757e-cc25-48a6-a774-5c2a8a281576" path="/var/lib/kubelet/pods/9e92757e-cc25-48a6-a774-5c2a8a281576/volumes"
Jan 07 09:56:50 crc kubenswrapper[5131]: I0107 09:56:50.663497 5131 patch_prober.go:28] interesting pod/machine-config-daemon-dvdrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 07 09:56:50 crc kubenswrapper[5131]: I0107 09:56:50.663936 5131 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 07 09:57:20 crc kubenswrapper[5131]: I0107 09:57:20.663118 5131 patch_prober.go:28] interesting pod/machine-config-daemon-dvdrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 07 09:57:20 crc kubenswrapper[5131]: I0107 09:57:20.663779 5131 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 07 09:57:20 crc kubenswrapper[5131]: I0107 09:57:20.663896 5131 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn"
Jan 07 09:57:20 crc kubenswrapper[5131]: I0107 09:57:20.664814 5131 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e79b67bc8389c68c2ac09cb38bf889a9519e79a63ac71b01c26e01c34973b2a7"} pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 07 09:57:20 crc kubenswrapper[5131]: I0107 09:57:20.664985 5131 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73" containerName="machine-config-daemon" containerID="cri-o://e79b67bc8389c68c2ac09cb38bf889a9519e79a63ac71b01c26e01c34973b2a7" gracePeriod=600
Jan 07 09:57:21 crc kubenswrapper[5131]: I0107 09:57:21.308673 5131 generic.go:358] "Generic (PLEG): container finished" podID="3942e752-44ba-4678-8723-6cd778e60d73" containerID="e79b67bc8389c68c2ac09cb38bf889a9519e79a63ac71b01c26e01c34973b2a7" exitCode=0
Jan 07 09:57:21 crc kubenswrapper[5131]: I0107 09:57:21.308782 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" event={"ID":"3942e752-44ba-4678-8723-6cd778e60d73","Type":"ContainerDied","Data":"e79b67bc8389c68c2ac09cb38bf889a9519e79a63ac71b01c26e01c34973b2a7"}
Jan 07 09:57:21 crc kubenswrapper[5131]: I0107 09:57:21.310429 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" event={"ID":"3942e752-44ba-4678-8723-6cd778e60d73","Type":"ContainerStarted","Data":"95b2f2f38ab6b9d142bf531750364a1f6ffccfcd46ca5680da77d1d639a07cbc"}
Jan 07 09:57:21 crc kubenswrapper[5131]: I0107 09:57:21.310469 5131 scope.go:117] "RemoveContainer" containerID="903008c51d00a0d816920831c3581e75cc8a3222da74d38c39c99f7e621c1add"
Jan 07 09:58:00 crc kubenswrapper[5131]: I0107 09:58:00.189269 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29462998-sj8hm"]
Jan 07 09:58:00 crc kubenswrapper[5131]: I0107 09:58:00.190348 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9e92757e-cc25-48a6-a774-5c2a8a281576" containerName="registry"
Jan 07 09:58:00 crc kubenswrapper[5131]: I0107 09:58:00.190365 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e92757e-cc25-48a6-a774-5c2a8a281576" containerName="registry"
Jan 07 09:58:00 crc kubenswrapper[5131]: I0107 09:58:00.190501 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="9e92757e-cc25-48a6-a774-5c2a8a281576" containerName="registry"
Jan 07 09:58:00 crc kubenswrapper[5131]: I0107 09:58:00.206654 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29462998-sj8hm"]
Jan 07 09:58:00 crc kubenswrapper[5131]: I0107 09:58:00.206795 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29462998-sj8hm"
Jan 07 09:58:00 crc kubenswrapper[5131]: I0107 09:58:00.209333 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-l8fwl\""
Jan 07 09:58:00 crc kubenswrapper[5131]: I0107 09:58:00.210352 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 07 09:58:00 crc kubenswrapper[5131]: I0107 09:58:00.211346 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 07 09:58:00 crc kubenswrapper[5131]: I0107 09:58:00.303347 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sq97\" (UniqueName: \"kubernetes.io/projected/1c2319e5-acee-42d1-8d43-b3bddb18f996-kube-api-access-7sq97\") pod \"auto-csr-approver-29462998-sj8hm\" (UID: \"1c2319e5-acee-42d1-8d43-b3bddb18f996\") " pod="openshift-infra/auto-csr-approver-29462998-sj8hm"
Jan 07 09:58:00 crc kubenswrapper[5131]: 
I0107 09:58:00.405570 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7sq97\" (UniqueName: \"kubernetes.io/projected/1c2319e5-acee-42d1-8d43-b3bddb18f996-kube-api-access-7sq97\") pod \"auto-csr-approver-29462998-sj8hm\" (UID: \"1c2319e5-acee-42d1-8d43-b3bddb18f996\") " pod="openshift-infra/auto-csr-approver-29462998-sj8hm" Jan 07 09:58:00 crc kubenswrapper[5131]: I0107 09:58:00.440194 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sq97\" (UniqueName: \"kubernetes.io/projected/1c2319e5-acee-42d1-8d43-b3bddb18f996-kube-api-access-7sq97\") pod \"auto-csr-approver-29462998-sj8hm\" (UID: \"1c2319e5-acee-42d1-8d43-b3bddb18f996\") " pod="openshift-infra/auto-csr-approver-29462998-sj8hm" Jan 07 09:58:00 crc kubenswrapper[5131]: I0107 09:58:00.522712 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29462998-sj8hm" Jan 07 09:58:00 crc kubenswrapper[5131]: I0107 09:58:00.749243 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29462998-sj8hm"] Jan 07 09:58:01 crc kubenswrapper[5131]: I0107 09:58:01.561233 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29462998-sj8hm" event={"ID":"1c2319e5-acee-42d1-8d43-b3bddb18f996","Type":"ContainerStarted","Data":"83be8fefdd8441689ef15617bc02a2e65ad382fd221057338634dff8a0b406ea"} Jan 07 09:58:04 crc kubenswrapper[5131]: I0107 09:58:04.258539 5131 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kubelet-serving" csr="csr-mxks8" Jan 07 09:58:04 crc kubenswrapper[5131]: I0107 09:58:04.293024 5131 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kubelet-serving" csr="csr-mxks8" Jan 07 09:58:04 crc kubenswrapper[5131]: I0107 09:58:04.583020 5131 generic.go:358] "Generic (PLEG): container finished" 
podID="1c2319e5-acee-42d1-8d43-b3bddb18f996" containerID="adfa71f8b7265e1000e225847369c7e3f9c04acb7e3c76e4944fa28536eafe5d" exitCode=0 Jan 07 09:58:04 crc kubenswrapper[5131]: I0107 09:58:04.583090 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29462998-sj8hm" event={"ID":"1c2319e5-acee-42d1-8d43-b3bddb18f996","Type":"ContainerDied","Data":"adfa71f8b7265e1000e225847369c7e3f9c04acb7e3c76e4944fa28536eafe5d"} Jan 07 09:58:05 crc kubenswrapper[5131]: I0107 09:58:05.295515 5131 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-02-06 09:53:04 +0000 UTC" deadline="2026-02-02 01:29:35.555179077 +0000 UTC" Jan 07 09:58:05 crc kubenswrapper[5131]: I0107 09:58:05.295591 5131 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="615h31m30.259595094s" Jan 07 09:58:05 crc kubenswrapper[5131]: I0107 09:58:05.849700 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29462998-sj8hm" Jan 07 09:58:05 crc kubenswrapper[5131]: I0107 09:58:05.978332 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7sq97\" (UniqueName: \"kubernetes.io/projected/1c2319e5-acee-42d1-8d43-b3bddb18f996-kube-api-access-7sq97\") pod \"1c2319e5-acee-42d1-8d43-b3bddb18f996\" (UID: \"1c2319e5-acee-42d1-8d43-b3bddb18f996\") " Jan 07 09:58:05 crc kubenswrapper[5131]: I0107 09:58:05.986059 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c2319e5-acee-42d1-8d43-b3bddb18f996-kube-api-access-7sq97" (OuterVolumeSpecName: "kube-api-access-7sq97") pod "1c2319e5-acee-42d1-8d43-b3bddb18f996" (UID: "1c2319e5-acee-42d1-8d43-b3bddb18f996"). InnerVolumeSpecName "kube-api-access-7sq97". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 09:58:06 crc kubenswrapper[5131]: I0107 09:58:06.080963 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7sq97\" (UniqueName: \"kubernetes.io/projected/1c2319e5-acee-42d1-8d43-b3bddb18f996-kube-api-access-7sq97\") on node \"crc\" DevicePath \"\"" Jan 07 09:58:06 crc kubenswrapper[5131]: I0107 09:58:06.295892 5131 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-02-06 09:53:04 +0000 UTC" deadline="2026-01-31 18:34:45.168367983 +0000 UTC" Jan 07 09:58:06 crc kubenswrapper[5131]: I0107 09:58:06.295965 5131 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="584h36m38.872408803s" Jan 07 09:58:06 crc kubenswrapper[5131]: I0107 09:58:06.602632 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29462998-sj8hm" event={"ID":"1c2319e5-acee-42d1-8d43-b3bddb18f996","Type":"ContainerDied","Data":"83be8fefdd8441689ef15617bc02a2e65ad382fd221057338634dff8a0b406ea"} Jan 07 09:58:06 crc kubenswrapper[5131]: I0107 09:58:06.602668 5131 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83be8fefdd8441689ef15617bc02a2e65ad382fd221057338634dff8a0b406ea" Jan 07 09:58:06 crc kubenswrapper[5131]: I0107 09:58:06.602682 5131 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29462998-sj8hm" Jan 07 09:59:32 crc kubenswrapper[5131]: I0107 09:59:32.459997 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 07 09:59:32 crc kubenswrapper[5131]: I0107 09:59:32.460368 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 07 09:59:50 crc kubenswrapper[5131]: I0107 09:59:50.663636 5131 patch_prober.go:28] interesting pod/machine-config-daemon-dvdrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 07 09:59:50 crc kubenswrapper[5131]: I0107 09:59:50.664586 5131 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 07 10:00:00 crc kubenswrapper[5131]: I0107 10:00:00.150088 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29463000-2q7zm"] Jan 07 10:00:00 crc kubenswrapper[5131]: I0107 10:00:00.151726 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1c2319e5-acee-42d1-8d43-b3bddb18f996" containerName="oc" Jan 07 10:00:00 crc kubenswrapper[5131]: I0107 10:00:00.151749 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c2319e5-acee-42d1-8d43-b3bddb18f996" containerName="oc" Jan 07 10:00:00 crc kubenswrapper[5131]: I0107 10:00:00.151973 5131 memory_manager.go:356] "RemoveStaleState 
removing state" podUID="1c2319e5-acee-42d1-8d43-b3bddb18f996" containerName="oc" Jan 07 10:00:00 crc kubenswrapper[5131]: I0107 10:00:00.189367 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29463000-2q7zm" Jan 07 10:00:00 crc kubenswrapper[5131]: I0107 10:00:00.191360 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 07 10:00:00 crc kubenswrapper[5131]: I0107 10:00:00.192749 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-l8fwl\"" Jan 07 10:00:00 crc kubenswrapper[5131]: I0107 10:00:00.194006 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 07 10:00:00 crc kubenswrapper[5131]: I0107 10:00:00.198470 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29463000-nv4wf"] Jan 07 10:00:00 crc kubenswrapper[5131]: I0107 10:00:00.203186 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29463000-nv4wf"] Jan 07 10:00:00 crc kubenswrapper[5131]: I0107 10:00:00.203220 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29463000-2q7zm"] Jan 07 10:00:00 crc kubenswrapper[5131]: I0107 10:00:00.203336 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29463000-nv4wf" Jan 07 10:00:00 crc kubenswrapper[5131]: I0107 10:00:00.205272 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 07 10:00:00 crc kubenswrapper[5131]: I0107 10:00:00.205594 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Jan 07 10:00:00 crc kubenswrapper[5131]: I0107 10:00:00.341001 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/18a9c5ff-b148-41c0-8572-bb73a4c1d182-secret-volume\") pod \"collect-profiles-29463000-nv4wf\" (UID: \"18a9c5ff-b148-41c0-8572-bb73a4c1d182\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29463000-nv4wf" Jan 07 10:00:00 crc kubenswrapper[5131]: I0107 10:00:00.341190 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xztp\" (UniqueName: \"kubernetes.io/projected/d07f612e-1eb1-4386-936a-12fec40a84d2-kube-api-access-9xztp\") pod \"auto-csr-approver-29463000-2q7zm\" (UID: \"d07f612e-1eb1-4386-936a-12fec40a84d2\") " pod="openshift-infra/auto-csr-approver-29463000-2q7zm" Jan 07 10:00:00 crc kubenswrapper[5131]: I0107 10:00:00.341236 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18a9c5ff-b148-41c0-8572-bb73a4c1d182-config-volume\") pod \"collect-profiles-29463000-nv4wf\" (UID: \"18a9c5ff-b148-41c0-8572-bb73a4c1d182\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29463000-nv4wf" Jan 07 10:00:00 crc kubenswrapper[5131]: I0107 10:00:00.341270 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-vbkjw\" (UniqueName: \"kubernetes.io/projected/18a9c5ff-b148-41c0-8572-bb73a4c1d182-kube-api-access-vbkjw\") pod \"collect-profiles-29463000-nv4wf\" (UID: \"18a9c5ff-b148-41c0-8572-bb73a4c1d182\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29463000-nv4wf" Jan 07 10:00:00 crc kubenswrapper[5131]: I0107 10:00:00.442772 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/18a9c5ff-b148-41c0-8572-bb73a4c1d182-secret-volume\") pod \"collect-profiles-29463000-nv4wf\" (UID: \"18a9c5ff-b148-41c0-8572-bb73a4c1d182\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29463000-nv4wf" Jan 07 10:00:00 crc kubenswrapper[5131]: I0107 10:00:00.442930 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9xztp\" (UniqueName: \"kubernetes.io/projected/d07f612e-1eb1-4386-936a-12fec40a84d2-kube-api-access-9xztp\") pod \"auto-csr-approver-29463000-2q7zm\" (UID: \"d07f612e-1eb1-4386-936a-12fec40a84d2\") " pod="openshift-infra/auto-csr-approver-29463000-2q7zm" Jan 07 10:00:00 crc kubenswrapper[5131]: I0107 10:00:00.443016 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18a9c5ff-b148-41c0-8572-bb73a4c1d182-config-volume\") pod \"collect-profiles-29463000-nv4wf\" (UID: \"18a9c5ff-b148-41c0-8572-bb73a4c1d182\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29463000-nv4wf" Jan 07 10:00:00 crc kubenswrapper[5131]: I0107 10:00:00.443061 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vbkjw\" (UniqueName: \"kubernetes.io/projected/18a9c5ff-b148-41c0-8572-bb73a4c1d182-kube-api-access-vbkjw\") pod \"collect-profiles-29463000-nv4wf\" (UID: \"18a9c5ff-b148-41c0-8572-bb73a4c1d182\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29463000-nv4wf" Jan 07 
10:00:00 crc kubenswrapper[5131]: I0107 10:00:00.444870 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18a9c5ff-b148-41c0-8572-bb73a4c1d182-config-volume\") pod \"collect-profiles-29463000-nv4wf\" (UID: \"18a9c5ff-b148-41c0-8572-bb73a4c1d182\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29463000-nv4wf" Jan 07 10:00:00 crc kubenswrapper[5131]: I0107 10:00:00.453770 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/18a9c5ff-b148-41c0-8572-bb73a4c1d182-secret-volume\") pod \"collect-profiles-29463000-nv4wf\" (UID: \"18a9c5ff-b148-41c0-8572-bb73a4c1d182\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29463000-nv4wf" Jan 07 10:00:00 crc kubenswrapper[5131]: I0107 10:00:00.468662 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xztp\" (UniqueName: \"kubernetes.io/projected/d07f612e-1eb1-4386-936a-12fec40a84d2-kube-api-access-9xztp\") pod \"auto-csr-approver-29463000-2q7zm\" (UID: \"d07f612e-1eb1-4386-936a-12fec40a84d2\") " pod="openshift-infra/auto-csr-approver-29463000-2q7zm" Jan 07 10:00:00 crc kubenswrapper[5131]: I0107 10:00:00.478192 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbkjw\" (UniqueName: \"kubernetes.io/projected/18a9c5ff-b148-41c0-8572-bb73a4c1d182-kube-api-access-vbkjw\") pod \"collect-profiles-29463000-nv4wf\" (UID: \"18a9c5ff-b148-41c0-8572-bb73a4c1d182\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29463000-nv4wf" Jan 07 10:00:00 crc kubenswrapper[5131]: I0107 10:00:00.510573 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29463000-2q7zm" Jan 07 10:00:00 crc kubenswrapper[5131]: I0107 10:00:00.525887 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29463000-nv4wf" Jan 07 10:00:00 crc kubenswrapper[5131]: I0107 10:00:00.795596 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29463000-nv4wf"] Jan 07 10:00:00 crc kubenswrapper[5131]: W0107 10:00:00.803420 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18a9c5ff_b148_41c0_8572_bb73a4c1d182.slice/crio-4669524973a1396f1a26e22688a92e6775fb93ce4dfd63cc858ad94464a9fdde WatchSource:0}: Error finding container 4669524973a1396f1a26e22688a92e6775fb93ce4dfd63cc858ad94464a9fdde: Status 404 returned error can't find the container with id 4669524973a1396f1a26e22688a92e6775fb93ce4dfd63cc858ad94464a9fdde Jan 07 10:00:00 crc kubenswrapper[5131]: I0107 10:00:00.808037 5131 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 07 10:00:00 crc kubenswrapper[5131]: W0107 10:00:00.823007 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd07f612e_1eb1_4386_936a_12fec40a84d2.slice/crio-60f4755a187e92dcb3634284bbdf4f3bf8b085e7d1bdbf34a1f27e8fc971ef8d WatchSource:0}: Error finding container 60f4755a187e92dcb3634284bbdf4f3bf8b085e7d1bdbf34a1f27e8fc971ef8d: Status 404 returned error can't find the container with id 60f4755a187e92dcb3634284bbdf4f3bf8b085e7d1bdbf34a1f27e8fc971ef8d Jan 07 10:00:00 crc kubenswrapper[5131]: I0107 10:00:00.824448 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29463000-2q7zm"] Jan 07 10:00:01 crc kubenswrapper[5131]: I0107 10:00:01.393433 5131 generic.go:358] "Generic (PLEG): container finished" podID="18a9c5ff-b148-41c0-8572-bb73a4c1d182" containerID="2f86fb65293b811dd6a0eb0c0f9595f9b2bcee7c97202023622026d418e7c776" exitCode=0 Jan 07 10:00:01 crc 
kubenswrapper[5131]: I0107 10:00:01.393527 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29463000-nv4wf" event={"ID":"18a9c5ff-b148-41c0-8572-bb73a4c1d182","Type":"ContainerDied","Data":"2f86fb65293b811dd6a0eb0c0f9595f9b2bcee7c97202023622026d418e7c776"} Jan 07 10:00:01 crc kubenswrapper[5131]: I0107 10:00:01.394027 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29463000-nv4wf" event={"ID":"18a9c5ff-b148-41c0-8572-bb73a4c1d182","Type":"ContainerStarted","Data":"4669524973a1396f1a26e22688a92e6775fb93ce4dfd63cc858ad94464a9fdde"} Jan 07 10:00:01 crc kubenswrapper[5131]: I0107 10:00:01.395096 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29463000-2q7zm" event={"ID":"d07f612e-1eb1-4386-936a-12fec40a84d2","Type":"ContainerStarted","Data":"60f4755a187e92dcb3634284bbdf4f3bf8b085e7d1bdbf34a1f27e8fc971ef8d"} Jan 07 10:00:02 crc kubenswrapper[5131]: I0107 10:00:02.659437 5131 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29463000-nv4wf" Jan 07 10:00:02 crc kubenswrapper[5131]: I0107 10:00:02.775349 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbkjw\" (UniqueName: \"kubernetes.io/projected/18a9c5ff-b148-41c0-8572-bb73a4c1d182-kube-api-access-vbkjw\") pod \"18a9c5ff-b148-41c0-8572-bb73a4c1d182\" (UID: \"18a9c5ff-b148-41c0-8572-bb73a4c1d182\") " Jan 07 10:00:02 crc kubenswrapper[5131]: I0107 10:00:02.775681 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18a9c5ff-b148-41c0-8572-bb73a4c1d182-config-volume\") pod \"18a9c5ff-b148-41c0-8572-bb73a4c1d182\" (UID: \"18a9c5ff-b148-41c0-8572-bb73a4c1d182\") " Jan 07 10:00:02 crc kubenswrapper[5131]: I0107 10:00:02.775709 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/18a9c5ff-b148-41c0-8572-bb73a4c1d182-secret-volume\") pod \"18a9c5ff-b148-41c0-8572-bb73a4c1d182\" (UID: \"18a9c5ff-b148-41c0-8572-bb73a4c1d182\") " Jan 07 10:00:02 crc kubenswrapper[5131]: I0107 10:00:02.776335 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18a9c5ff-b148-41c0-8572-bb73a4c1d182-config-volume" (OuterVolumeSpecName: "config-volume") pod "18a9c5ff-b148-41c0-8572-bb73a4c1d182" (UID: "18a9c5ff-b148-41c0-8572-bb73a4c1d182"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 10:00:02 crc kubenswrapper[5131]: I0107 10:00:02.780779 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18a9c5ff-b148-41c0-8572-bb73a4c1d182-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "18a9c5ff-b148-41c0-8572-bb73a4c1d182" (UID: "18a9c5ff-b148-41c0-8572-bb73a4c1d182"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 10:00:02 crc kubenswrapper[5131]: I0107 10:00:02.798576 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18a9c5ff-b148-41c0-8572-bb73a4c1d182-kube-api-access-vbkjw" (OuterVolumeSpecName: "kube-api-access-vbkjw") pod "18a9c5ff-b148-41c0-8572-bb73a4c1d182" (UID: "18a9c5ff-b148-41c0-8572-bb73a4c1d182"). InnerVolumeSpecName "kube-api-access-vbkjw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 10:00:02 crc kubenswrapper[5131]: I0107 10:00:02.877441 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vbkjw\" (UniqueName: \"kubernetes.io/projected/18a9c5ff-b148-41c0-8572-bb73a4c1d182-kube-api-access-vbkjw\") on node \"crc\" DevicePath \"\"" Jan 07 10:00:02 crc kubenswrapper[5131]: I0107 10:00:02.877485 5131 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18a9c5ff-b148-41c0-8572-bb73a4c1d182-config-volume\") on node \"crc\" DevicePath \"\"" Jan 07 10:00:02 crc kubenswrapper[5131]: I0107 10:00:02.877494 5131 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/18a9c5ff-b148-41c0-8572-bb73a4c1d182-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 07 10:00:03 crc kubenswrapper[5131]: I0107 10:00:03.412264 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29463000-nv4wf" event={"ID":"18a9c5ff-b148-41c0-8572-bb73a4c1d182","Type":"ContainerDied","Data":"4669524973a1396f1a26e22688a92e6775fb93ce4dfd63cc858ad94464a9fdde"} Jan 07 10:00:03 crc kubenswrapper[5131]: I0107 10:00:03.412327 5131 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4669524973a1396f1a26e22688a92e6775fb93ce4dfd63cc858ad94464a9fdde" Jan 07 10:00:03 crc kubenswrapper[5131]: I0107 10:00:03.412283 5131 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29463000-nv4wf" Jan 07 10:00:03 crc kubenswrapper[5131]: I0107 10:00:03.415613 5131 generic.go:358] "Generic (PLEG): container finished" podID="d07f612e-1eb1-4386-936a-12fec40a84d2" containerID="a8b1b2b71911c1e028a62910b91917425f2697eceb43a024e5f795f9223fde60" exitCode=0 Jan 07 10:00:03 crc kubenswrapper[5131]: I0107 10:00:03.415690 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29463000-2q7zm" event={"ID":"d07f612e-1eb1-4386-936a-12fec40a84d2","Type":"ContainerDied","Data":"a8b1b2b71911c1e028a62910b91917425f2697eceb43a024e5f795f9223fde60"} Jan 07 10:00:04 crc kubenswrapper[5131]: I0107 10:00:04.641356 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29463000-2q7zm" Jan 07 10:00:04 crc kubenswrapper[5131]: I0107 10:00:04.803012 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xztp\" (UniqueName: \"kubernetes.io/projected/d07f612e-1eb1-4386-936a-12fec40a84d2-kube-api-access-9xztp\") pod \"d07f612e-1eb1-4386-936a-12fec40a84d2\" (UID: \"d07f612e-1eb1-4386-936a-12fec40a84d2\") " Jan 07 10:00:04 crc kubenswrapper[5131]: I0107 10:00:04.814231 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d07f612e-1eb1-4386-936a-12fec40a84d2-kube-api-access-9xztp" (OuterVolumeSpecName: "kube-api-access-9xztp") pod "d07f612e-1eb1-4386-936a-12fec40a84d2" (UID: "d07f612e-1eb1-4386-936a-12fec40a84d2"). InnerVolumeSpecName "kube-api-access-9xztp". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 10:00:04 crc kubenswrapper[5131]: I0107 10:00:04.905577 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9xztp\" (UniqueName: \"kubernetes.io/projected/d07f612e-1eb1-4386-936a-12fec40a84d2-kube-api-access-9xztp\") on node \"crc\" DevicePath \"\"" Jan 07 10:00:05 crc kubenswrapper[5131]: I0107 10:00:05.431238 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29463000-2q7zm" event={"ID":"d07f612e-1eb1-4386-936a-12fec40a84d2","Type":"ContainerDied","Data":"60f4755a187e92dcb3634284bbdf4f3bf8b085e7d1bdbf34a1f27e8fc971ef8d"} Jan 07 10:00:05 crc kubenswrapper[5131]: I0107 10:00:05.431266 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29463000-2q7zm" Jan 07 10:00:05 crc kubenswrapper[5131]: I0107 10:00:05.431292 5131 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60f4755a187e92dcb3634284bbdf4f3bf8b085e7d1bdbf34a1f27e8fc971ef8d" Jan 07 10:00:20 crc kubenswrapper[5131]: I0107 10:00:20.663180 5131 patch_prober.go:28] interesting pod/machine-config-daemon-dvdrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 07 10:00:20 crc kubenswrapper[5131]: I0107 10:00:20.663688 5131 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 07 10:00:47 crc kubenswrapper[5131]: I0107 10:00:47.867073 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" 
pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-n4kr4"] Jan 07 10:00:47 crc kubenswrapper[5131]: I0107 10:00:47.867957 5131 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-n4kr4" podUID="ad935b69-bef7-46a2-a03a-367404c13329" containerName="kube-rbac-proxy" containerID="cri-o://d00c3a62d34136628bd91ada478ee07d51f7a815da74ebaa5735bbc078e2e179" gracePeriod=30 Jan 07 10:00:47 crc kubenswrapper[5131]: I0107 10:00:47.868348 5131 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-n4kr4" podUID="ad935b69-bef7-46a2-a03a-367404c13329" containerName="ovnkube-cluster-manager" containerID="cri-o://0b7b88b93e617551aab3b962425d9f62ab7ad5827a4ff558cd757a590855d31a" gracePeriod=30 Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.046957 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-kpj7m"] Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.047680 5131 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" podUID="592342ad-cf5f-4290-aa15-e99a6454cbf5" containerName="ovn-controller" containerID="cri-o://04f53033f9dc881a5ee1e627ed97a9064b5b49efc2ede373e4de37a4df60b584" gracePeriod=30 Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.047703 5131 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" podUID="592342ad-cf5f-4290-aa15-e99a6454cbf5" containerName="nbdb" containerID="cri-o://92dbe74b1d2e5e7df12950a9d782b5f9bbf56fa3f40e9f9f0b295b3826dcc856" gracePeriod=30 Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.047941 5131 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" podUID="592342ad-cf5f-4290-aa15-e99a6454cbf5" 
containerName="northd" containerID="cri-o://45a4508f424cf7d346d29d14019a0e8197a69731f0733d68bd8927a5f487751e" gracePeriod=30 Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.047902 5131 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" podUID="592342ad-cf5f-4290-aa15-e99a6454cbf5" containerName="kube-rbac-proxy-node" containerID="cri-o://5ca79d5c3178ab5e7bbe17de7a5f72f6a6f44230a4e31508ecc7b285e555a03a" gracePeriod=30 Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.048020 5131 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" podUID="592342ad-cf5f-4290-aa15-e99a6454cbf5" containerName="ovn-acl-logging" containerID="cri-o://f802473365d7e31f324d34beaa6227d2158e02ba753e62b801fb8b41a09ea25c" gracePeriod=30 Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.048047 5131 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" podUID="592342ad-cf5f-4290-aa15-e99a6454cbf5" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://88b5153fcc0ca30117ca94f1936142fa25bb640f9d8a31b37195b519fd101837" gracePeriod=30 Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.047750 5131 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" podUID="592342ad-cf5f-4290-aa15-e99a6454cbf5" containerName="sbdb" containerID="cri-o://95fb0f05fbec814936f2b0eba3acdb40bb507d796edc8d78c69d4519b53a985f" gracePeriod=30 Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.079854 5131 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-n4kr4" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.080888 5131 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" podUID="592342ad-cf5f-4290-aa15-e99a6454cbf5" containerName="ovnkube-controller" containerID="cri-o://126470e29248b08b35119158ec1d00986e765ade9dce116264bc2f31d71a8be5" gracePeriod=30 Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.109158 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-8cfsq"] Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.109778 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ad935b69-bef7-46a2-a03a-367404c13329" containerName="ovnkube-cluster-manager" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.109799 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad935b69-bef7-46a2-a03a-367404c13329" containerName="ovnkube-cluster-manager" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.109815 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ad935b69-bef7-46a2-a03a-367404c13329" containerName="kube-rbac-proxy" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.109822 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad935b69-bef7-46a2-a03a-367404c13329" containerName="kube-rbac-proxy" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.109847 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="18a9c5ff-b148-41c0-8572-bb73a4c1d182" containerName="collect-profiles" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.109857 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a9c5ff-b148-41c0-8572-bb73a4c1d182" containerName="collect-profiles" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.109871 5131 cpu_manager.go:401] 
"RemoveStaleState: containerMap: removing container" podUID="d07f612e-1eb1-4386-936a-12fec40a84d2" containerName="oc" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.109878 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="d07f612e-1eb1-4386-936a-12fec40a84d2" containerName="oc" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.109975 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="d07f612e-1eb1-4386-936a-12fec40a84d2" containerName="oc" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.109985 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="ad935b69-bef7-46a2-a03a-367404c13329" containerName="kube-rbac-proxy" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.109995 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="18a9c5ff-b148-41c0-8572-bb73a4c1d182" containerName="collect-profiles" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.110002 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="ad935b69-bef7-46a2-a03a-367404c13329" containerName="ovnkube-cluster-manager" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.113336 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-8cfsq" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.251263 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9czf\" (UniqueName: \"kubernetes.io/projected/ad935b69-bef7-46a2-a03a-367404c13329-kube-api-access-r9czf\") pod \"ad935b69-bef7-46a2-a03a-367404c13329\" (UID: \"ad935b69-bef7-46a2-a03a-367404c13329\") " Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.251318 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ad935b69-bef7-46a2-a03a-367404c13329-ovnkube-config\") pod \"ad935b69-bef7-46a2-a03a-367404c13329\" (UID: \"ad935b69-bef7-46a2-a03a-367404c13329\") " Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.251349 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ad935b69-bef7-46a2-a03a-367404c13329-env-overrides\") pod \"ad935b69-bef7-46a2-a03a-367404c13329\" (UID: \"ad935b69-bef7-46a2-a03a-367404c13329\") " Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.251457 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ad935b69-bef7-46a2-a03a-367404c13329-ovn-control-plane-metrics-cert\") pod \"ad935b69-bef7-46a2-a03a-367404c13329\" (UID: \"ad935b69-bef7-46a2-a03a-367404c13329\") " Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.251639 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95xsk\" (UniqueName: \"kubernetes.io/projected/f1fb1094-265d-4837-b9b8-58afa08a416d-kube-api-access-95xsk\") pod \"ovnkube-control-plane-97c9b6c48-8cfsq\" (UID: \"f1fb1094-265d-4837-b9b8-58afa08a416d\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-8cfsq" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.251700 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f1fb1094-265d-4837-b9b8-58afa08a416d-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-8cfsq\" (UID: \"f1fb1094-265d-4837-b9b8-58afa08a416d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-8cfsq" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.251770 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f1fb1094-265d-4837-b9b8-58afa08a416d-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-8cfsq\" (UID: \"f1fb1094-265d-4837-b9b8-58afa08a416d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-8cfsq" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.251809 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f1fb1094-265d-4837-b9b8-58afa08a416d-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-8cfsq\" (UID: \"f1fb1094-265d-4837-b9b8-58afa08a416d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-8cfsq" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.252216 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad935b69-bef7-46a2-a03a-367404c13329-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "ad935b69-bef7-46a2-a03a-367404c13329" (UID: "ad935b69-bef7-46a2-a03a-367404c13329"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.252242 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad935b69-bef7-46a2-a03a-367404c13329-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "ad935b69-bef7-46a2-a03a-367404c13329" (UID: "ad935b69-bef7-46a2-a03a-367404c13329"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.259147 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad935b69-bef7-46a2-a03a-367404c13329-kube-api-access-r9czf" (OuterVolumeSpecName: "kube-api-access-r9czf") pod "ad935b69-bef7-46a2-a03a-367404c13329" (UID: "ad935b69-bef7-46a2-a03a-367404c13329"). InnerVolumeSpecName "kube-api-access-r9czf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.259147 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad935b69-bef7-46a2-a03a-367404c13329-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "ad935b69-bef7-46a2-a03a-367404c13329" (UID: "ad935b69-bef7-46a2-a03a-367404c13329"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.352429 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f1fb1094-265d-4837-b9b8-58afa08a416d-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-8cfsq\" (UID: \"f1fb1094-265d-4837-b9b8-58afa08a416d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-8cfsq" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.352603 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f1fb1094-265d-4837-b9b8-58afa08a416d-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-8cfsq\" (UID: \"f1fb1094-265d-4837-b9b8-58afa08a416d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-8cfsq" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.352652 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f1fb1094-265d-4837-b9b8-58afa08a416d-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-8cfsq\" (UID: \"f1fb1094-265d-4837-b9b8-58afa08a416d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-8cfsq" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.352684 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-95xsk\" (UniqueName: \"kubernetes.io/projected/f1fb1094-265d-4837-b9b8-58afa08a416d-kube-api-access-95xsk\") pod \"ovnkube-control-plane-97c9b6c48-8cfsq\" (UID: \"f1fb1094-265d-4837-b9b8-58afa08a416d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-8cfsq" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.352758 5131 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/ad935b69-bef7-46a2-a03a-367404c13329-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.352771 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r9czf\" (UniqueName: \"kubernetes.io/projected/ad935b69-bef7-46a2-a03a-367404c13329-kube-api-access-r9czf\") on node \"crc\" DevicePath \"\"" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.352779 5131 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ad935b69-bef7-46a2-a03a-367404c13329-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.352788 5131 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ad935b69-bef7-46a2-a03a-367404c13329-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.353217 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f1fb1094-265d-4837-b9b8-58afa08a416d-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-8cfsq\" (UID: \"f1fb1094-265d-4837-b9b8-58afa08a416d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-8cfsq" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.353458 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f1fb1094-265d-4837-b9b8-58afa08a416d-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-8cfsq\" (UID: \"f1fb1094-265d-4837-b9b8-58afa08a416d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-8cfsq" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.358186 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/f1fb1094-265d-4837-b9b8-58afa08a416d-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-8cfsq\" (UID: \"f1fb1094-265d-4837-b9b8-58afa08a416d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-8cfsq" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.371167 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-95xsk\" (UniqueName: \"kubernetes.io/projected/f1fb1094-265d-4837-b9b8-58afa08a416d-kube-api-access-95xsk\") pod \"ovnkube-control-plane-97c9b6c48-8cfsq\" (UID: \"f1fb1094-265d-4837-b9b8-58afa08a416d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-8cfsq" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.455590 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-8cfsq" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.720059 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kpj7m_592342ad-cf5f-4290-aa15-e99a6454cbf5/ovn-acl-logging/0.log" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.720526 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kpj7m_592342ad-cf5f-4290-aa15-e99a6454cbf5/ovn-controller/0.log" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.720986 5131 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.745936 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wcqw9_a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1/kube-multus/0.log" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.745996 5131 generic.go:358] "Generic (PLEG): container finished" podID="a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1" containerID="6ec74912c138c89ccad68970857fdf60edfad26d60a9ecc7be033be6f8349b05" exitCode=2 Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.746093 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wcqw9" event={"ID":"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1","Type":"ContainerDied","Data":"6ec74912c138c89ccad68970857fdf60edfad26d60a9ecc7be033be6f8349b05"} Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.746854 5131 scope.go:117] "RemoveContainer" containerID="6ec74912c138c89ccad68970857fdf60edfad26d60a9ecc7be033be6f8349b05" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.747919 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-8cfsq" event={"ID":"f1fb1094-265d-4837-b9b8-58afa08a416d","Type":"ContainerStarted","Data":"cda7d93f7baa8b49ff2762f61b57cf6b7fdcf099d366c15db5daa8937be00aed"} Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.755491 5131 generic.go:358] "Generic (PLEG): container finished" podID="ad935b69-bef7-46a2-a03a-367404c13329" containerID="0b7b88b93e617551aab3b962425d9f62ab7ad5827a4ff558cd757a590855d31a" exitCode=0 Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.755537 5131 generic.go:358] "Generic (PLEG): container finished" podID="ad935b69-bef7-46a2-a03a-367404c13329" containerID="d00c3a62d34136628bd91ada478ee07d51f7a815da74ebaa5735bbc078e2e179" exitCode=0 Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.755800 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-n4kr4" event={"ID":"ad935b69-bef7-46a2-a03a-367404c13329","Type":"ContainerDied","Data":"0b7b88b93e617551aab3b962425d9f62ab7ad5827a4ff558cd757a590855d31a"} Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.755933 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-n4kr4" event={"ID":"ad935b69-bef7-46a2-a03a-367404c13329","Type":"ContainerDied","Data":"d00c3a62d34136628bd91ada478ee07d51f7a815da74ebaa5735bbc078e2e179"} Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.755957 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-n4kr4" event={"ID":"ad935b69-bef7-46a2-a03a-367404c13329","Type":"ContainerDied","Data":"cf0149ee7495c3cc741d9ef73df2e4298e45d78b190a036516c810fbb965a563"} Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.756009 5131 scope.go:117] "RemoveContainer" containerID="0b7b88b93e617551aab3b962425d9f62ab7ad5827a4ff558cd757a590855d31a" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.756174 5131 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-n4kr4" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.786488 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6zhj6"] Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.788335 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="592342ad-cf5f-4290-aa15-e99a6454cbf5" containerName="sbdb" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.788524 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="592342ad-cf5f-4290-aa15-e99a6454cbf5" containerName="sbdb" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.788636 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="592342ad-cf5f-4290-aa15-e99a6454cbf5" containerName="ovn-controller" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.788793 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="592342ad-cf5f-4290-aa15-e99a6454cbf5" containerName="ovn-controller" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.788981 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="592342ad-cf5f-4290-aa15-e99a6454cbf5" containerName="northd" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.789075 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="592342ad-cf5f-4290-aa15-e99a6454cbf5" containerName="northd" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.789143 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="592342ad-cf5f-4290-aa15-e99a6454cbf5" containerName="nbdb" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.789213 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="592342ad-cf5f-4290-aa15-e99a6454cbf5" containerName="nbdb" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.789279 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="592342ad-cf5f-4290-aa15-e99a6454cbf5" containerName="ovn-acl-logging" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.789539 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="592342ad-cf5f-4290-aa15-e99a6454cbf5" containerName="ovn-acl-logging" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.789631 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="592342ad-cf5f-4290-aa15-e99a6454cbf5" containerName="kube-rbac-proxy-ovn-metrics" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.789779 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="592342ad-cf5f-4290-aa15-e99a6454cbf5" containerName="kube-rbac-proxy-ovn-metrics" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.789899 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="592342ad-cf5f-4290-aa15-e99a6454cbf5" containerName="ovnkube-controller" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.790005 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="592342ad-cf5f-4290-aa15-e99a6454cbf5" containerName="ovnkube-controller" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.790501 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="592342ad-cf5f-4290-aa15-e99a6454cbf5" containerName="kube-rbac-proxy-node" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.790586 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="592342ad-cf5f-4290-aa15-e99a6454cbf5" containerName="kube-rbac-proxy-node" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.791387 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="592342ad-cf5f-4290-aa15-e99a6454cbf5" containerName="kubecfg-setup" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.791472 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="592342ad-cf5f-4290-aa15-e99a6454cbf5" containerName="kubecfg-setup" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 
10:00:48.791761 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="592342ad-cf5f-4290-aa15-e99a6454cbf5" containerName="ovn-controller" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.791877 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="592342ad-cf5f-4290-aa15-e99a6454cbf5" containerName="sbdb" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.791949 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="592342ad-cf5f-4290-aa15-e99a6454cbf5" containerName="kube-rbac-proxy-ovn-metrics" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.792049 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="592342ad-cf5f-4290-aa15-e99a6454cbf5" containerName="ovn-acl-logging" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.792109 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="592342ad-cf5f-4290-aa15-e99a6454cbf5" containerName="ovnkube-controller" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.792163 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="592342ad-cf5f-4290-aa15-e99a6454cbf5" containerName="northd" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.792207 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="592342ad-cf5f-4290-aa15-e99a6454cbf5" containerName="nbdb" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.792263 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="592342ad-cf5f-4290-aa15-e99a6454cbf5" containerName="kube-rbac-proxy-node" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.796360 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kpj7m_592342ad-cf5f-4290-aa15-e99a6454cbf5/ovn-acl-logging/0.log" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.796921 5131 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kpj7m_592342ad-cf5f-4290-aa15-e99a6454cbf5/ovn-controller/0.log" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.797259 5131 generic.go:358] "Generic (PLEG): container finished" podID="592342ad-cf5f-4290-aa15-e99a6454cbf5" containerID="126470e29248b08b35119158ec1d00986e765ade9dce116264bc2f31d71a8be5" exitCode=0 Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.797286 5131 generic.go:358] "Generic (PLEG): container finished" podID="592342ad-cf5f-4290-aa15-e99a6454cbf5" containerID="95fb0f05fbec814936f2b0eba3acdb40bb507d796edc8d78c69d4519b53a985f" exitCode=0 Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.797298 5131 generic.go:358] "Generic (PLEG): container finished" podID="592342ad-cf5f-4290-aa15-e99a6454cbf5" containerID="92dbe74b1d2e5e7df12950a9d782b5f9bbf56fa3f40e9f9f0b295b3826dcc856" exitCode=0 Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.797309 5131 generic.go:358] "Generic (PLEG): container finished" podID="592342ad-cf5f-4290-aa15-e99a6454cbf5" containerID="45a4508f424cf7d346d29d14019a0e8197a69731f0733d68bd8927a5f487751e" exitCode=0 Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.797318 5131 generic.go:358] "Generic (PLEG): container finished" podID="592342ad-cf5f-4290-aa15-e99a6454cbf5" containerID="88b5153fcc0ca30117ca94f1936142fa25bb640f9d8a31b37195b519fd101837" exitCode=0 Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.797327 5131 generic.go:358] "Generic (PLEG): container finished" podID="592342ad-cf5f-4290-aa15-e99a6454cbf5" containerID="5ca79d5c3178ab5e7bbe17de7a5f72f6a6f44230a4e31508ecc7b285e555a03a" exitCode=0 Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.797335 5131 generic.go:358] "Generic (PLEG): container finished" podID="592342ad-cf5f-4290-aa15-e99a6454cbf5" containerID="f802473365d7e31f324d34beaa6227d2158e02ba753e62b801fb8b41a09ea25c" exitCode=143 Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.797343 5131 
generic.go:358] "Generic (PLEG): container finished" podID="592342ad-cf5f-4290-aa15-e99a6454cbf5" containerID="04f53033f9dc881a5ee1e627ed97a9064b5b49efc2ede373e4de37a4df60b584" exitCode=143 Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.803268 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" event={"ID":"592342ad-cf5f-4290-aa15-e99a6454cbf5","Type":"ContainerDied","Data":"126470e29248b08b35119158ec1d00986e765ade9dce116264bc2f31d71a8be5"} Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.803960 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" event={"ID":"592342ad-cf5f-4290-aa15-e99a6454cbf5","Type":"ContainerDied","Data":"95fb0f05fbec814936f2b0eba3acdb40bb507d796edc8d78c69d4519b53a985f"} Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.804001 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" event={"ID":"592342ad-cf5f-4290-aa15-e99a6454cbf5","Type":"ContainerDied","Data":"92dbe74b1d2e5e7df12950a9d782b5f9bbf56fa3f40e9f9f0b295b3826dcc856"} Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.804013 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" event={"ID":"592342ad-cf5f-4290-aa15-e99a6454cbf5","Type":"ContainerDied","Data":"45a4508f424cf7d346d29d14019a0e8197a69731f0733d68bd8927a5f487751e"} Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.804025 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" event={"ID":"592342ad-cf5f-4290-aa15-e99a6454cbf5","Type":"ContainerDied","Data":"88b5153fcc0ca30117ca94f1936142fa25bb640f9d8a31b37195b519fd101837"} Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.804037 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" 
event={"ID":"592342ad-cf5f-4290-aa15-e99a6454cbf5","Type":"ContainerDied","Data":"5ca79d5c3178ab5e7bbe17de7a5f72f6a6f44230a4e31508ecc7b285e555a03a"} Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.804050 5131 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"126470e29248b08b35119158ec1d00986e765ade9dce116264bc2f31d71a8be5"} Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.804064 5131 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"95fb0f05fbec814936f2b0eba3acdb40bb507d796edc8d78c69d4519b53a985f"} Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.804071 5131 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"92dbe74b1d2e5e7df12950a9d782b5f9bbf56fa3f40e9f9f0b295b3826dcc856"} Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.804078 5131 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"45a4508f424cf7d346d29d14019a0e8197a69731f0733d68bd8927a5f487751e"} Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.804086 5131 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"88b5153fcc0ca30117ca94f1936142fa25bb640f9d8a31b37195b519fd101837"} Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.804094 5131 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5ca79d5c3178ab5e7bbe17de7a5f72f6a6f44230a4e31508ecc7b285e555a03a"} Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.804100 5131 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f802473365d7e31f324d34beaa6227d2158e02ba753e62b801fb8b41a09ea25c"} Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.804115 5131 pod_container_deletor.go:114] "Failed 
to issue the request to remove container" containerID={"Type":"cri-o","ID":"04f53033f9dc881a5ee1e627ed97a9064b5b49efc2ede373e4de37a4df60b584"}
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.804122 5131 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aea9d4515f20cdfbd546b072ef046aa909e6f6410217676d722880631f009535"}
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.804127 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6"
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.804132 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" event={"ID":"592342ad-cf5f-4290-aa15-e99a6454cbf5","Type":"ContainerDied","Data":"f802473365d7e31f324d34beaa6227d2158e02ba753e62b801fb8b41a09ea25c"}
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.804295 5131 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"126470e29248b08b35119158ec1d00986e765ade9dce116264bc2f31d71a8be5"}
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.804320 5131 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"95fb0f05fbec814936f2b0eba3acdb40bb507d796edc8d78c69d4519b53a985f"}
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.804330 5131 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"92dbe74b1d2e5e7df12950a9d782b5f9bbf56fa3f40e9f9f0b295b3826dcc856"}
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.804339 5131 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"45a4508f424cf7d346d29d14019a0e8197a69731f0733d68bd8927a5f487751e"}
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.804348 5131 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"88b5153fcc0ca30117ca94f1936142fa25bb640f9d8a31b37195b519fd101837"}
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.804356 5131 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5ca79d5c3178ab5e7bbe17de7a5f72f6a6f44230a4e31508ecc7b285e555a03a"}
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.804366 5131 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f802473365d7e31f324d34beaa6227d2158e02ba753e62b801fb8b41a09ea25c"}
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.804374 5131 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"04f53033f9dc881a5ee1e627ed97a9064b5b49efc2ede373e4de37a4df60b584"}
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.804382 5131 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aea9d4515f20cdfbd546b072ef046aa909e6f6410217676d722880631f009535"}
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.804400 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" event={"ID":"592342ad-cf5f-4290-aa15-e99a6454cbf5","Type":"ContainerDied","Data":"04f53033f9dc881a5ee1e627ed97a9064b5b49efc2ede373e4de37a4df60b584"}
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.804417 5131 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"126470e29248b08b35119158ec1d00986e765ade9dce116264bc2f31d71a8be5"}
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.804428 5131 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"95fb0f05fbec814936f2b0eba3acdb40bb507d796edc8d78c69d4519b53a985f"}
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.804437 5131 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"92dbe74b1d2e5e7df12950a9d782b5f9bbf56fa3f40e9f9f0b295b3826dcc856"}
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.804446 5131 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"45a4508f424cf7d346d29d14019a0e8197a69731f0733d68bd8927a5f487751e"}
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.804455 5131 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"88b5153fcc0ca30117ca94f1936142fa25bb640f9d8a31b37195b519fd101837"}
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.804463 5131 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5ca79d5c3178ab5e7bbe17de7a5f72f6a6f44230a4e31508ecc7b285e555a03a"}
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.804472 5131 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f802473365d7e31f324d34beaa6227d2158e02ba753e62b801fb8b41a09ea25c"}
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.804480 5131 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"04f53033f9dc881a5ee1e627ed97a9064b5b49efc2ede373e4de37a4df60b584"}
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.804488 5131 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aea9d4515f20cdfbd546b072ef046aa909e6f6410217676d722880631f009535"}
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.804501 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m" event={"ID":"592342ad-cf5f-4290-aa15-e99a6454cbf5","Type":"ContainerDied","Data":"6920e97d4ae3db7ace2a35f2b0285671fe6c1cb143daeda1d12ff8dfe1d750af"}
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.804514 5131 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"126470e29248b08b35119158ec1d00986e765ade9dce116264bc2f31d71a8be5"}
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.804523 5131 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"95fb0f05fbec814936f2b0eba3acdb40bb507d796edc8d78c69d4519b53a985f"}
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.804532 5131 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"92dbe74b1d2e5e7df12950a9d782b5f9bbf56fa3f40e9f9f0b295b3826dcc856"}
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.804540 5131 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"45a4508f424cf7d346d29d14019a0e8197a69731f0733d68bd8927a5f487751e"}
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.804547 5131 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"88b5153fcc0ca30117ca94f1936142fa25bb640f9d8a31b37195b519fd101837"}
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.804556 5131 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5ca79d5c3178ab5e7bbe17de7a5f72f6a6f44230a4e31508ecc7b285e555a03a"}
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.804563 5131 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f802473365d7e31f324d34beaa6227d2158e02ba753e62b801fb8b41a09ea25c"}
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.804573 5131 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"04f53033f9dc881a5ee1e627ed97a9064b5b49efc2ede373e4de37a4df60b584"}
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.804582 5131 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aea9d4515f20cdfbd546b072ef046aa909e6f6410217676d722880631f009535"}
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.803883 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-kpj7m"
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.816060 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-n4kr4"]
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.819200 5131 scope.go:117] "RemoveContainer" containerID="d00c3a62d34136628bd91ada478ee07d51f7a815da74ebaa5735bbc078e2e179"
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.821209 5131 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-n4kr4"]
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.835792 5131 scope.go:117] "RemoveContainer" containerID="0b7b88b93e617551aab3b962425d9f62ab7ad5827a4ff558cd757a590855d31a"
Jan 07 10:00:48 crc kubenswrapper[5131]: E0107 10:00:48.836310 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b7b88b93e617551aab3b962425d9f62ab7ad5827a4ff558cd757a590855d31a\": container with ID starting with 0b7b88b93e617551aab3b962425d9f62ab7ad5827a4ff558cd757a590855d31a not found: ID does not exist" containerID="0b7b88b93e617551aab3b962425d9f62ab7ad5827a4ff558cd757a590855d31a"
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.836350 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b7b88b93e617551aab3b962425d9f62ab7ad5827a4ff558cd757a590855d31a"} err="failed to get container status \"0b7b88b93e617551aab3b962425d9f62ab7ad5827a4ff558cd757a590855d31a\": rpc error: code = NotFound desc = could not find container \"0b7b88b93e617551aab3b962425d9f62ab7ad5827a4ff558cd757a590855d31a\": container with ID starting with 0b7b88b93e617551aab3b962425d9f62ab7ad5827a4ff558cd757a590855d31a not found: ID does not exist"
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.836376 5131 scope.go:117] "RemoveContainer" containerID="d00c3a62d34136628bd91ada478ee07d51f7a815da74ebaa5735bbc078e2e179"
Jan 07 10:00:48 crc kubenswrapper[5131]: E0107 10:00:48.836768 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d00c3a62d34136628bd91ada478ee07d51f7a815da74ebaa5735bbc078e2e179\": container with ID starting with d00c3a62d34136628bd91ada478ee07d51f7a815da74ebaa5735bbc078e2e179 not found: ID does not exist" containerID="d00c3a62d34136628bd91ada478ee07d51f7a815da74ebaa5735bbc078e2e179"
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.836788 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d00c3a62d34136628bd91ada478ee07d51f7a815da74ebaa5735bbc078e2e179"} err="failed to get container status \"d00c3a62d34136628bd91ada478ee07d51f7a815da74ebaa5735bbc078e2e179\": rpc error: code = NotFound desc = could not find container \"d00c3a62d34136628bd91ada478ee07d51f7a815da74ebaa5735bbc078e2e179\": container with ID starting with d00c3a62d34136628bd91ada478ee07d51f7a815da74ebaa5735bbc078e2e179 not found: ID does not exist"
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.836801 5131 scope.go:117] "RemoveContainer" containerID="0b7b88b93e617551aab3b962425d9f62ab7ad5827a4ff558cd757a590855d31a"
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.837067 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b7b88b93e617551aab3b962425d9f62ab7ad5827a4ff558cd757a590855d31a"} err="failed to get container status \"0b7b88b93e617551aab3b962425d9f62ab7ad5827a4ff558cd757a590855d31a\": rpc error: code = NotFound desc = could not find container \"0b7b88b93e617551aab3b962425d9f62ab7ad5827a4ff558cd757a590855d31a\": container with ID starting with 0b7b88b93e617551aab3b962425d9f62ab7ad5827a4ff558cd757a590855d31a not found: ID does not exist"
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.837085 5131 scope.go:117] "RemoveContainer" containerID="d00c3a62d34136628bd91ada478ee07d51f7a815da74ebaa5735bbc078e2e179"
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.837388 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d00c3a62d34136628bd91ada478ee07d51f7a815da74ebaa5735bbc078e2e179"} err="failed to get container status \"d00c3a62d34136628bd91ada478ee07d51f7a815da74ebaa5735bbc078e2e179\": rpc error: code = NotFound desc = could not find container \"d00c3a62d34136628bd91ada478ee07d51f7a815da74ebaa5735bbc078e2e179\": container with ID starting with d00c3a62d34136628bd91ada478ee07d51f7a815da74ebaa5735bbc078e2e179 not found: ID does not exist"
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.837418 5131 scope.go:117] "RemoveContainer" containerID="126470e29248b08b35119158ec1d00986e765ade9dce116264bc2f31d71a8be5"
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.859938 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-etc-openvswitch\") pod \"592342ad-cf5f-4290-aa15-e99a6454cbf5\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") "
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.860035 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "592342ad-cf5f-4290-aa15-e99a6454cbf5" (UID: "592342ad-cf5f-4290-aa15-e99a6454cbf5"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.860311 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-host-kubelet\") pod \"592342ad-cf5f-4290-aa15-e99a6454cbf5\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") "
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.860458 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-systemd-units\") pod \"592342ad-cf5f-4290-aa15-e99a6454cbf5\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") "
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.860593 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-host-cni-netd\") pod \"592342ad-cf5f-4290-aa15-e99a6454cbf5\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") "
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.860508 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "592342ad-cf5f-4290-aa15-e99a6454cbf5" (UID: "592342ad-cf5f-4290-aa15-e99a6454cbf5"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.860548 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "592342ad-cf5f-4290-aa15-e99a6454cbf5" (UID: "592342ad-cf5f-4290-aa15-e99a6454cbf5"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.860802 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "592342ad-cf5f-4290-aa15-e99a6454cbf5" (UID: "592342ad-cf5f-4290-aa15-e99a6454cbf5"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.860905 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-host-slash" (OuterVolumeSpecName: "host-slash") pod "592342ad-cf5f-4290-aa15-e99a6454cbf5" (UID: "592342ad-cf5f-4290-aa15-e99a6454cbf5"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.861106 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-host-slash\") pod \"592342ad-cf5f-4290-aa15-e99a6454cbf5\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") "
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.861287 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"592342ad-cf5f-4290-aa15-e99a6454cbf5\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") "
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.861392 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "592342ad-cf5f-4290-aa15-e99a6454cbf5" (UID: "592342ad-cf5f-4290-aa15-e99a6454cbf5"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.861408 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/592342ad-cf5f-4290-aa15-e99a6454cbf5-ovnkube-script-lib\") pod \"592342ad-cf5f-4290-aa15-e99a6454cbf5\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") "
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.861662 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78wtj\" (UniqueName: \"kubernetes.io/projected/592342ad-cf5f-4290-aa15-e99a6454cbf5-kube-api-access-78wtj\") pod \"592342ad-cf5f-4290-aa15-e99a6454cbf5\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") "
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.861792 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-node-log\") pod \"592342ad-cf5f-4290-aa15-e99a6454cbf5\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") "
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.861993 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-run-ovn\") pod \"592342ad-cf5f-4290-aa15-e99a6454cbf5\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") "
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.862083 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-log-socket\") pod \"592342ad-cf5f-4290-aa15-e99a6454cbf5\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") "
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.862197 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/592342ad-cf5f-4290-aa15-e99a6454cbf5-ovnkube-config\") pod \"592342ad-cf5f-4290-aa15-e99a6454cbf5\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") "
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.862277 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/592342ad-cf5f-4290-aa15-e99a6454cbf5-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "592342ad-cf5f-4290-aa15-e99a6454cbf5" (UID: "592342ad-cf5f-4290-aa15-e99a6454cbf5"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.862329 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "592342ad-cf5f-4290-aa15-e99a6454cbf5" (UID: "592342ad-cf5f-4290-aa15-e99a6454cbf5"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.862405 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-host-cni-bin\") pod \"592342ad-cf5f-4290-aa15-e99a6454cbf5\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") "
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.862530 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-log-socket" (OuterVolumeSpecName: "log-socket") pod "592342ad-cf5f-4290-aa15-e99a6454cbf5" (UID: "592342ad-cf5f-4290-aa15-e99a6454cbf5"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.862573 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-node-log" (OuterVolumeSpecName: "node-log") pod "592342ad-cf5f-4290-aa15-e99a6454cbf5" (UID: "592342ad-cf5f-4290-aa15-e99a6454cbf5"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.862600 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "592342ad-cf5f-4290-aa15-e99a6454cbf5" (UID: "592342ad-cf5f-4290-aa15-e99a6454cbf5"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.862726 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-run-systemd\") pod \"592342ad-cf5f-4290-aa15-e99a6454cbf5\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") "
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.862910 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-var-lib-openvswitch\") pod \"592342ad-cf5f-4290-aa15-e99a6454cbf5\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") "
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.863005 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "592342ad-cf5f-4290-aa15-e99a6454cbf5" (UID: "592342ad-cf5f-4290-aa15-e99a6454cbf5"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.863112 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/592342ad-cf5f-4290-aa15-e99a6454cbf5-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "592342ad-cf5f-4290-aa15-e99a6454cbf5" (UID: "592342ad-cf5f-4290-aa15-e99a6454cbf5"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.863576 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-host-run-ovn-kubernetes\") pod \"592342ad-cf5f-4290-aa15-e99a6454cbf5\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") "
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.863688 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/592342ad-cf5f-4290-aa15-e99a6454cbf5-ovn-node-metrics-cert\") pod \"592342ad-cf5f-4290-aa15-e99a6454cbf5\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") "
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.863796 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-run-openvswitch\") pod \"592342ad-cf5f-4290-aa15-e99a6454cbf5\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") "
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.863940 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/592342ad-cf5f-4290-aa15-e99a6454cbf5-env-overrides\") pod \"592342ad-cf5f-4290-aa15-e99a6454cbf5\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") "
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.864062 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-host-run-netns\") pod \"592342ad-cf5f-4290-aa15-e99a6454cbf5\" (UID: \"592342ad-cf5f-4290-aa15-e99a6454cbf5\") "
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.863761 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "592342ad-cf5f-4290-aa15-e99a6454cbf5" (UID: "592342ad-cf5f-4290-aa15-e99a6454cbf5"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.864252 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "592342ad-cf5f-4290-aa15-e99a6454cbf5" (UID: "592342ad-cf5f-4290-aa15-e99a6454cbf5"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.864458 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e1c4e5a9-733a-471f-b6f0-dfaced935ba8-host-kubelet\") pod \"ovnkube-node-6zhj6\" (UID: \"e1c4e5a9-733a-471f-b6f0-dfaced935ba8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6"
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.864540 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/592342ad-cf5f-4290-aa15-e99a6454cbf5-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "592342ad-cf5f-4290-aa15-e99a6454cbf5" (UID: "592342ad-cf5f-4290-aa15-e99a6454cbf5"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.864637 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e1c4e5a9-733a-471f-b6f0-dfaced935ba8-host-run-ovn-kubernetes\") pod \"ovnkube-node-6zhj6\" (UID: \"e1c4e5a9-733a-471f-b6f0-dfaced935ba8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6"
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.864828 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e1c4e5a9-733a-471f-b6f0-dfaced935ba8-host-cni-bin\") pod \"ovnkube-node-6zhj6\" (UID: \"e1c4e5a9-733a-471f-b6f0-dfaced935ba8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6"
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.865010 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e1c4e5a9-733a-471f-b6f0-dfaced935ba8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6zhj6\" (UID: \"e1c4e5a9-733a-471f-b6f0-dfaced935ba8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6"
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.865162 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e1c4e5a9-733a-471f-b6f0-dfaced935ba8-ovnkube-script-lib\") pod \"ovnkube-node-6zhj6\" (UID: \"e1c4e5a9-733a-471f-b6f0-dfaced935ba8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6"
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.866522 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e1c4e5a9-733a-471f-b6f0-dfaced935ba8-ovn-node-metrics-cert\") pod \"ovnkube-node-6zhj6\" (UID: \"e1c4e5a9-733a-471f-b6f0-dfaced935ba8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6"
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.866689 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e1c4e5a9-733a-471f-b6f0-dfaced935ba8-log-socket\") pod \"ovnkube-node-6zhj6\" (UID: \"e1c4e5a9-733a-471f-b6f0-dfaced935ba8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6"
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.866922 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e1c4e5a9-733a-471f-b6f0-dfaced935ba8-systemd-units\") pod \"ovnkube-node-6zhj6\" (UID: \"e1c4e5a9-733a-471f-b6f0-dfaced935ba8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6"
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.867092 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e1c4e5a9-733a-471f-b6f0-dfaced935ba8-env-overrides\") pod \"ovnkube-node-6zhj6\" (UID: \"e1c4e5a9-733a-471f-b6f0-dfaced935ba8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6"
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.867248 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e1c4e5a9-733a-471f-b6f0-dfaced935ba8-host-slash\") pod \"ovnkube-node-6zhj6\" (UID: \"e1c4e5a9-733a-471f-b6f0-dfaced935ba8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6"
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.867487 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e1c4e5a9-733a-471f-b6f0-dfaced935ba8-run-systemd\") pod \"ovnkube-node-6zhj6\" (UID: \"e1c4e5a9-733a-471f-b6f0-dfaced935ba8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6"
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.864748 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "592342ad-cf5f-4290-aa15-e99a6454cbf5" (UID: "592342ad-cf5f-4290-aa15-e99a6454cbf5"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.867728 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e1c4e5a9-733a-471f-b6f0-dfaced935ba8-etc-openvswitch\") pod \"ovnkube-node-6zhj6\" (UID: \"e1c4e5a9-733a-471f-b6f0-dfaced935ba8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6"
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.867874 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e1c4e5a9-733a-471f-b6f0-dfaced935ba8-node-log\") pod \"ovnkube-node-6zhj6\" (UID: \"e1c4e5a9-733a-471f-b6f0-dfaced935ba8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6"
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.868984 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/592342ad-cf5f-4290-aa15-e99a6454cbf5-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "592342ad-cf5f-4290-aa15-e99a6454cbf5" (UID: "592342ad-cf5f-4290-aa15-e99a6454cbf5"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.869049 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/592342ad-cf5f-4290-aa15-e99a6454cbf5-kube-api-access-78wtj" (OuterVolumeSpecName: "kube-api-access-78wtj") pod "592342ad-cf5f-4290-aa15-e99a6454cbf5" (UID: "592342ad-cf5f-4290-aa15-e99a6454cbf5"). InnerVolumeSpecName "kube-api-access-78wtj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.869329 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e1c4e5a9-733a-471f-b6f0-dfaced935ba8-run-openvswitch\") pod \"ovnkube-node-6zhj6\" (UID: \"e1c4e5a9-733a-471f-b6f0-dfaced935ba8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6"
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.869383 5131 scope.go:117] "RemoveContainer" containerID="95fb0f05fbec814936f2b0eba3acdb40bb507d796edc8d78c69d4519b53a985f"
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.869486 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e1c4e5a9-733a-471f-b6f0-dfaced935ba8-run-ovn\") pod \"ovnkube-node-6zhj6\" (UID: \"e1c4e5a9-733a-471f-b6f0-dfaced935ba8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6"
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.869711 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e1c4e5a9-733a-471f-b6f0-dfaced935ba8-var-lib-openvswitch\") pod \"ovnkube-node-6zhj6\" (UID: \"e1c4e5a9-733a-471f-b6f0-dfaced935ba8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6"
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.869959 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e1c4e5a9-733a-471f-b6f0-dfaced935ba8-ovnkube-config\") pod \"ovnkube-node-6zhj6\" (UID: \"e1c4e5a9-733a-471f-b6f0-dfaced935ba8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6"
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.870203 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e1c4e5a9-733a-471f-b6f0-dfaced935ba8-host-cni-netd\") pod \"ovnkube-node-6zhj6\" (UID: \"e1c4e5a9-733a-471f-b6f0-dfaced935ba8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6"
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.870359 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsv2c\" (UniqueName: \"kubernetes.io/projected/e1c4e5a9-733a-471f-b6f0-dfaced935ba8-kube-api-access-nsv2c\") pod \"ovnkube-node-6zhj6\" (UID: \"e1c4e5a9-733a-471f-b6f0-dfaced935ba8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6"
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.870623 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e1c4e5a9-733a-471f-b6f0-dfaced935ba8-host-run-netns\") pod \"ovnkube-node-6zhj6\" (UID: \"e1c4e5a9-733a-471f-b6f0-dfaced935ba8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6"
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.871765 5131 reconciler_common.go:299] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-run-ovn\") on node \"crc\" DevicePath \"\""
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.871891 5131 reconciler_common.go:299] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-log-socket\") on node \"crc\" DevicePath \"\""
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.872012 5131 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/592342ad-cf5f-4290-aa15-e99a6454cbf5-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.872119 5131 reconciler_common.go:299] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-host-cni-bin\") on node \"crc\" DevicePath \"\""
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.872235 5131 reconciler_common.go:299] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.872358 5131 reconciler_common.go:299] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.872476 5131 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/592342ad-cf5f-4290-aa15-e99a6454cbf5-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.872589 5131 reconciler_common.go:299] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-run-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.872697 5131 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/592342ad-cf5f-4290-aa15-e99a6454cbf5-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.872871 5131 reconciler_common.go:299] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-host-run-netns\") on node \"crc\" DevicePath \"\""
Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.873546 5131 reconciler_common.go:299] "Volume detached for volume \"etc-openvswitch\" (UniqueName:
\"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.873645 5131 reconciler_common.go:299] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.873730 5131 reconciler_common.go:299] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.873864 5131 reconciler_common.go:299] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.874200 5131 reconciler_common.go:299] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-host-slash\") on node \"crc\" DevicePath \"\"" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.875660 5131 reconciler_common.go:299] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.875762 5131 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/592342ad-cf5f-4290-aa15-e99a6454cbf5-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.875875 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-78wtj\" (UniqueName: 
\"kubernetes.io/projected/592342ad-cf5f-4290-aa15-e99a6454cbf5-kube-api-access-78wtj\") on node \"crc\" DevicePath \"\"" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.876031 5131 reconciler_common.go:299] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-node-log\") on node \"crc\" DevicePath \"\"" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.880184 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "592342ad-cf5f-4290-aa15-e99a6454cbf5" (UID: "592342ad-cf5f-4290-aa15-e99a6454cbf5"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.886986 5131 scope.go:117] "RemoveContainer" containerID="92dbe74b1d2e5e7df12950a9d782b5f9bbf56fa3f40e9f9f0b295b3826dcc856" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.904024 5131 scope.go:117] "RemoveContainer" containerID="45a4508f424cf7d346d29d14019a0e8197a69731f0733d68bd8927a5f487751e" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.919048 5131 scope.go:117] "RemoveContainer" containerID="88b5153fcc0ca30117ca94f1936142fa25bb640f9d8a31b37195b519fd101837" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.931193 5131 scope.go:117] "RemoveContainer" containerID="5ca79d5c3178ab5e7bbe17de7a5f72f6a6f44230a4e31508ecc7b285e555a03a" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.943595 5131 scope.go:117] "RemoveContainer" containerID="f802473365d7e31f324d34beaa6227d2158e02ba753e62b801fb8b41a09ea25c" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.958143 5131 scope.go:117] "RemoveContainer" containerID="04f53033f9dc881a5ee1e627ed97a9064b5b49efc2ede373e4de37a4df60b584" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.972877 5131 scope.go:117] "RemoveContainer" 
containerID="aea9d4515f20cdfbd546b072ef046aa909e6f6410217676d722880631f009535" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.976863 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e1c4e5a9-733a-471f-b6f0-dfaced935ba8-run-systemd\") pod \"ovnkube-node-6zhj6\" (UID: \"e1c4e5a9-733a-471f-b6f0-dfaced935ba8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.976900 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e1c4e5a9-733a-471f-b6f0-dfaced935ba8-etc-openvswitch\") pod \"ovnkube-node-6zhj6\" (UID: \"e1c4e5a9-733a-471f-b6f0-dfaced935ba8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.976948 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e1c4e5a9-733a-471f-b6f0-dfaced935ba8-node-log\") pod \"ovnkube-node-6zhj6\" (UID: \"e1c4e5a9-733a-471f-b6f0-dfaced935ba8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.976977 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e1c4e5a9-733a-471f-b6f0-dfaced935ba8-run-openvswitch\") pod \"ovnkube-node-6zhj6\" (UID: \"e1c4e5a9-733a-471f-b6f0-dfaced935ba8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.977002 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e1c4e5a9-733a-471f-b6f0-dfaced935ba8-run-ovn\") pod \"ovnkube-node-6zhj6\" (UID: \"e1c4e5a9-733a-471f-b6f0-dfaced935ba8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6" Jan 07 10:00:48 crc 
kubenswrapper[5131]: I0107 10:00:48.977024 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e1c4e5a9-733a-471f-b6f0-dfaced935ba8-var-lib-openvswitch\") pod \"ovnkube-node-6zhj6\" (UID: \"e1c4e5a9-733a-471f-b6f0-dfaced935ba8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.977044 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e1c4e5a9-733a-471f-b6f0-dfaced935ba8-ovnkube-config\") pod \"ovnkube-node-6zhj6\" (UID: \"e1c4e5a9-733a-471f-b6f0-dfaced935ba8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.977064 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e1c4e5a9-733a-471f-b6f0-dfaced935ba8-host-cni-netd\") pod \"ovnkube-node-6zhj6\" (UID: \"e1c4e5a9-733a-471f-b6f0-dfaced935ba8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.977086 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nsv2c\" (UniqueName: \"kubernetes.io/projected/e1c4e5a9-733a-471f-b6f0-dfaced935ba8-kube-api-access-nsv2c\") pod \"ovnkube-node-6zhj6\" (UID: \"e1c4e5a9-733a-471f-b6f0-dfaced935ba8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.977117 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e1c4e5a9-733a-471f-b6f0-dfaced935ba8-host-run-netns\") pod \"ovnkube-node-6zhj6\" (UID: \"e1c4e5a9-733a-471f-b6f0-dfaced935ba8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.977159 5131 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e1c4e5a9-733a-471f-b6f0-dfaced935ba8-host-kubelet\") pod \"ovnkube-node-6zhj6\" (UID: \"e1c4e5a9-733a-471f-b6f0-dfaced935ba8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.977228 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e1c4e5a9-733a-471f-b6f0-dfaced935ba8-host-run-ovn-kubernetes\") pod \"ovnkube-node-6zhj6\" (UID: \"e1c4e5a9-733a-471f-b6f0-dfaced935ba8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.977256 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e1c4e5a9-733a-471f-b6f0-dfaced935ba8-host-cni-bin\") pod \"ovnkube-node-6zhj6\" (UID: \"e1c4e5a9-733a-471f-b6f0-dfaced935ba8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.977297 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e1c4e5a9-733a-471f-b6f0-dfaced935ba8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6zhj6\" (UID: \"e1c4e5a9-733a-471f-b6f0-dfaced935ba8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.977328 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e1c4e5a9-733a-471f-b6f0-dfaced935ba8-ovnkube-script-lib\") pod \"ovnkube-node-6zhj6\" (UID: \"e1c4e5a9-733a-471f-b6f0-dfaced935ba8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.977358 5131 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e1c4e5a9-733a-471f-b6f0-dfaced935ba8-ovn-node-metrics-cert\") pod \"ovnkube-node-6zhj6\" (UID: \"e1c4e5a9-733a-471f-b6f0-dfaced935ba8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.977387 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e1c4e5a9-733a-471f-b6f0-dfaced935ba8-log-socket\") pod \"ovnkube-node-6zhj6\" (UID: \"e1c4e5a9-733a-471f-b6f0-dfaced935ba8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.977410 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e1c4e5a9-733a-471f-b6f0-dfaced935ba8-systemd-units\") pod \"ovnkube-node-6zhj6\" (UID: \"e1c4e5a9-733a-471f-b6f0-dfaced935ba8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.977435 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e1c4e5a9-733a-471f-b6f0-dfaced935ba8-env-overrides\") pod \"ovnkube-node-6zhj6\" (UID: \"e1c4e5a9-733a-471f-b6f0-dfaced935ba8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.977471 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e1c4e5a9-733a-471f-b6f0-dfaced935ba8-host-slash\") pod \"ovnkube-node-6zhj6\" (UID: \"e1c4e5a9-733a-471f-b6f0-dfaced935ba8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.977519 5131 reconciler_common.go:299] "Volume detached for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/592342ad-cf5f-4290-aa15-e99a6454cbf5-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.977574 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e1c4e5a9-733a-471f-b6f0-dfaced935ba8-host-slash\") pod \"ovnkube-node-6zhj6\" (UID: \"e1c4e5a9-733a-471f-b6f0-dfaced935ba8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.977616 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e1c4e5a9-733a-471f-b6f0-dfaced935ba8-run-systemd\") pod \"ovnkube-node-6zhj6\" (UID: \"e1c4e5a9-733a-471f-b6f0-dfaced935ba8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.977642 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e1c4e5a9-733a-471f-b6f0-dfaced935ba8-etc-openvswitch\") pod \"ovnkube-node-6zhj6\" (UID: \"e1c4e5a9-733a-471f-b6f0-dfaced935ba8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.977671 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e1c4e5a9-733a-471f-b6f0-dfaced935ba8-node-log\") pod \"ovnkube-node-6zhj6\" (UID: \"e1c4e5a9-733a-471f-b6f0-dfaced935ba8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.977698 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e1c4e5a9-733a-471f-b6f0-dfaced935ba8-run-openvswitch\") pod \"ovnkube-node-6zhj6\" (UID: \"e1c4e5a9-733a-471f-b6f0-dfaced935ba8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6" Jan 07 10:00:48 crc 
kubenswrapper[5131]: I0107 10:00:48.977727 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e1c4e5a9-733a-471f-b6f0-dfaced935ba8-run-ovn\") pod \"ovnkube-node-6zhj6\" (UID: \"e1c4e5a9-733a-471f-b6f0-dfaced935ba8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.977754 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e1c4e5a9-733a-471f-b6f0-dfaced935ba8-var-lib-openvswitch\") pod \"ovnkube-node-6zhj6\" (UID: \"e1c4e5a9-733a-471f-b6f0-dfaced935ba8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.978674 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e1c4e5a9-733a-471f-b6f0-dfaced935ba8-ovnkube-config\") pod \"ovnkube-node-6zhj6\" (UID: \"e1c4e5a9-733a-471f-b6f0-dfaced935ba8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.978678 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e1c4e5a9-733a-471f-b6f0-dfaced935ba8-host-cni-netd\") pod \"ovnkube-node-6zhj6\" (UID: \"e1c4e5a9-733a-471f-b6f0-dfaced935ba8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.979107 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e1c4e5a9-733a-471f-b6f0-dfaced935ba8-host-run-netns\") pod \"ovnkube-node-6zhj6\" (UID: \"e1c4e5a9-733a-471f-b6f0-dfaced935ba8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.979166 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e1c4e5a9-733a-471f-b6f0-dfaced935ba8-host-kubelet\") pod \"ovnkube-node-6zhj6\" (UID: \"e1c4e5a9-733a-471f-b6f0-dfaced935ba8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.979212 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e1c4e5a9-733a-471f-b6f0-dfaced935ba8-host-run-ovn-kubernetes\") pod \"ovnkube-node-6zhj6\" (UID: \"e1c4e5a9-733a-471f-b6f0-dfaced935ba8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.979261 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e1c4e5a9-733a-471f-b6f0-dfaced935ba8-host-cni-bin\") pod \"ovnkube-node-6zhj6\" (UID: \"e1c4e5a9-733a-471f-b6f0-dfaced935ba8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.979302 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e1c4e5a9-733a-471f-b6f0-dfaced935ba8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6zhj6\" (UID: \"e1c4e5a9-733a-471f-b6f0-dfaced935ba8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.979354 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e1c4e5a9-733a-471f-b6f0-dfaced935ba8-log-socket\") pod \"ovnkube-node-6zhj6\" (UID: \"e1c4e5a9-733a-471f-b6f0-dfaced935ba8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.979959 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/e1c4e5a9-733a-471f-b6f0-dfaced935ba8-systemd-units\") pod \"ovnkube-node-6zhj6\" (UID: \"e1c4e5a9-733a-471f-b6f0-dfaced935ba8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.980114 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e1c4e5a9-733a-471f-b6f0-dfaced935ba8-ovnkube-script-lib\") pod \"ovnkube-node-6zhj6\" (UID: \"e1c4e5a9-733a-471f-b6f0-dfaced935ba8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.980361 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e1c4e5a9-733a-471f-b6f0-dfaced935ba8-env-overrides\") pod \"ovnkube-node-6zhj6\" (UID: \"e1c4e5a9-733a-471f-b6f0-dfaced935ba8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.983769 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e1c4e5a9-733a-471f-b6f0-dfaced935ba8-ovn-node-metrics-cert\") pod \"ovnkube-node-6zhj6\" (UID: \"e1c4e5a9-733a-471f-b6f0-dfaced935ba8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.987978 5131 scope.go:117] "RemoveContainer" containerID="126470e29248b08b35119158ec1d00986e765ade9dce116264bc2f31d71a8be5" Jan 07 10:00:48 crc kubenswrapper[5131]: E0107 10:00:48.988351 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"126470e29248b08b35119158ec1d00986e765ade9dce116264bc2f31d71a8be5\": container with ID starting with 126470e29248b08b35119158ec1d00986e765ade9dce116264bc2f31d71a8be5 not found: ID does not exist" containerID="126470e29248b08b35119158ec1d00986e765ade9dce116264bc2f31d71a8be5" Jan 
07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.988378 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"126470e29248b08b35119158ec1d00986e765ade9dce116264bc2f31d71a8be5"} err="failed to get container status \"126470e29248b08b35119158ec1d00986e765ade9dce116264bc2f31d71a8be5\": rpc error: code = NotFound desc = could not find container \"126470e29248b08b35119158ec1d00986e765ade9dce116264bc2f31d71a8be5\": container with ID starting with 126470e29248b08b35119158ec1d00986e765ade9dce116264bc2f31d71a8be5 not found: ID does not exist" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.988395 5131 scope.go:117] "RemoveContainer" containerID="95fb0f05fbec814936f2b0eba3acdb40bb507d796edc8d78c69d4519b53a985f" Jan 07 10:00:48 crc kubenswrapper[5131]: E0107 10:00:48.988602 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95fb0f05fbec814936f2b0eba3acdb40bb507d796edc8d78c69d4519b53a985f\": container with ID starting with 95fb0f05fbec814936f2b0eba3acdb40bb507d796edc8d78c69d4519b53a985f not found: ID does not exist" containerID="95fb0f05fbec814936f2b0eba3acdb40bb507d796edc8d78c69d4519b53a985f" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.988621 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95fb0f05fbec814936f2b0eba3acdb40bb507d796edc8d78c69d4519b53a985f"} err="failed to get container status \"95fb0f05fbec814936f2b0eba3acdb40bb507d796edc8d78c69d4519b53a985f\": rpc error: code = NotFound desc = could not find container \"95fb0f05fbec814936f2b0eba3acdb40bb507d796edc8d78c69d4519b53a985f\": container with ID starting with 95fb0f05fbec814936f2b0eba3acdb40bb507d796edc8d78c69d4519b53a985f not found: ID does not exist" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.988638 5131 scope.go:117] "RemoveContainer" 
containerID="92dbe74b1d2e5e7df12950a9d782b5f9bbf56fa3f40e9f9f0b295b3826dcc856" Jan 07 10:00:48 crc kubenswrapper[5131]: E0107 10:00:48.988917 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92dbe74b1d2e5e7df12950a9d782b5f9bbf56fa3f40e9f9f0b295b3826dcc856\": container with ID starting with 92dbe74b1d2e5e7df12950a9d782b5f9bbf56fa3f40e9f9f0b295b3826dcc856 not found: ID does not exist" containerID="92dbe74b1d2e5e7df12950a9d782b5f9bbf56fa3f40e9f9f0b295b3826dcc856" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.988938 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92dbe74b1d2e5e7df12950a9d782b5f9bbf56fa3f40e9f9f0b295b3826dcc856"} err="failed to get container status \"92dbe74b1d2e5e7df12950a9d782b5f9bbf56fa3f40e9f9f0b295b3826dcc856\": rpc error: code = NotFound desc = could not find container \"92dbe74b1d2e5e7df12950a9d782b5f9bbf56fa3f40e9f9f0b295b3826dcc856\": container with ID starting with 92dbe74b1d2e5e7df12950a9d782b5f9bbf56fa3f40e9f9f0b295b3826dcc856 not found: ID does not exist" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.988950 5131 scope.go:117] "RemoveContainer" containerID="45a4508f424cf7d346d29d14019a0e8197a69731f0733d68bd8927a5f487751e" Jan 07 10:00:48 crc kubenswrapper[5131]: E0107 10:00:48.989144 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45a4508f424cf7d346d29d14019a0e8197a69731f0733d68bd8927a5f487751e\": container with ID starting with 45a4508f424cf7d346d29d14019a0e8197a69731f0733d68bd8927a5f487751e not found: ID does not exist" containerID="45a4508f424cf7d346d29d14019a0e8197a69731f0733d68bd8927a5f487751e" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.989161 5131 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"45a4508f424cf7d346d29d14019a0e8197a69731f0733d68bd8927a5f487751e"} err="failed to get container status \"45a4508f424cf7d346d29d14019a0e8197a69731f0733d68bd8927a5f487751e\": rpc error: code = NotFound desc = could not find container \"45a4508f424cf7d346d29d14019a0e8197a69731f0733d68bd8927a5f487751e\": container with ID starting with 45a4508f424cf7d346d29d14019a0e8197a69731f0733d68bd8927a5f487751e not found: ID does not exist" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.989172 5131 scope.go:117] "RemoveContainer" containerID="88b5153fcc0ca30117ca94f1936142fa25bb640f9d8a31b37195b519fd101837" Jan 07 10:00:48 crc kubenswrapper[5131]: E0107 10:00:48.989341 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88b5153fcc0ca30117ca94f1936142fa25bb640f9d8a31b37195b519fd101837\": container with ID starting with 88b5153fcc0ca30117ca94f1936142fa25bb640f9d8a31b37195b519fd101837 not found: ID does not exist" containerID="88b5153fcc0ca30117ca94f1936142fa25bb640f9d8a31b37195b519fd101837" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.989358 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88b5153fcc0ca30117ca94f1936142fa25bb640f9d8a31b37195b519fd101837"} err="failed to get container status \"88b5153fcc0ca30117ca94f1936142fa25bb640f9d8a31b37195b519fd101837\": rpc error: code = NotFound desc = could not find container \"88b5153fcc0ca30117ca94f1936142fa25bb640f9d8a31b37195b519fd101837\": container with ID starting with 88b5153fcc0ca30117ca94f1936142fa25bb640f9d8a31b37195b519fd101837 not found: ID does not exist" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.989368 5131 scope.go:117] "RemoveContainer" containerID="5ca79d5c3178ab5e7bbe17de7a5f72f6a6f44230a4e31508ecc7b285e555a03a" Jan 07 10:00:48 crc kubenswrapper[5131]: E0107 10:00:48.989526 5131 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"5ca79d5c3178ab5e7bbe17de7a5f72f6a6f44230a4e31508ecc7b285e555a03a\": container with ID starting with 5ca79d5c3178ab5e7bbe17de7a5f72f6a6f44230a4e31508ecc7b285e555a03a not found: ID does not exist" containerID="5ca79d5c3178ab5e7bbe17de7a5f72f6a6f44230a4e31508ecc7b285e555a03a" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.989541 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ca79d5c3178ab5e7bbe17de7a5f72f6a6f44230a4e31508ecc7b285e555a03a"} err="failed to get container status \"5ca79d5c3178ab5e7bbe17de7a5f72f6a6f44230a4e31508ecc7b285e555a03a\": rpc error: code = NotFound desc = could not find container \"5ca79d5c3178ab5e7bbe17de7a5f72f6a6f44230a4e31508ecc7b285e555a03a\": container with ID starting with 5ca79d5c3178ab5e7bbe17de7a5f72f6a6f44230a4e31508ecc7b285e555a03a not found: ID does not exist" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.989552 5131 scope.go:117] "RemoveContainer" containerID="f802473365d7e31f324d34beaa6227d2158e02ba753e62b801fb8b41a09ea25c" Jan 07 10:00:48 crc kubenswrapper[5131]: E0107 10:00:48.989704 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f802473365d7e31f324d34beaa6227d2158e02ba753e62b801fb8b41a09ea25c\": container with ID starting with f802473365d7e31f324d34beaa6227d2158e02ba753e62b801fb8b41a09ea25c not found: ID does not exist" containerID="f802473365d7e31f324d34beaa6227d2158e02ba753e62b801fb8b41a09ea25c" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.989720 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f802473365d7e31f324d34beaa6227d2158e02ba753e62b801fb8b41a09ea25c"} err="failed to get container status \"f802473365d7e31f324d34beaa6227d2158e02ba753e62b801fb8b41a09ea25c\": rpc error: code = NotFound desc = could not find container 
\"f802473365d7e31f324d34beaa6227d2158e02ba753e62b801fb8b41a09ea25c\": container with ID starting with f802473365d7e31f324d34beaa6227d2158e02ba753e62b801fb8b41a09ea25c not found: ID does not exist" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.989730 5131 scope.go:117] "RemoveContainer" containerID="04f53033f9dc881a5ee1e627ed97a9064b5b49efc2ede373e4de37a4df60b584" Jan 07 10:00:48 crc kubenswrapper[5131]: E0107 10:00:48.990389 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04f53033f9dc881a5ee1e627ed97a9064b5b49efc2ede373e4de37a4df60b584\": container with ID starting with 04f53033f9dc881a5ee1e627ed97a9064b5b49efc2ede373e4de37a4df60b584 not found: ID does not exist" containerID="04f53033f9dc881a5ee1e627ed97a9064b5b49efc2ede373e4de37a4df60b584" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.990414 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04f53033f9dc881a5ee1e627ed97a9064b5b49efc2ede373e4de37a4df60b584"} err="failed to get container status \"04f53033f9dc881a5ee1e627ed97a9064b5b49efc2ede373e4de37a4df60b584\": rpc error: code = NotFound desc = could not find container \"04f53033f9dc881a5ee1e627ed97a9064b5b49efc2ede373e4de37a4df60b584\": container with ID starting with 04f53033f9dc881a5ee1e627ed97a9064b5b49efc2ede373e4de37a4df60b584 not found: ID does not exist" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.990428 5131 scope.go:117] "RemoveContainer" containerID="aea9d4515f20cdfbd546b072ef046aa909e6f6410217676d722880631f009535" Jan 07 10:00:48 crc kubenswrapper[5131]: E0107 10:00:48.990606 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aea9d4515f20cdfbd546b072ef046aa909e6f6410217676d722880631f009535\": container with ID starting with aea9d4515f20cdfbd546b072ef046aa909e6f6410217676d722880631f009535 not found: ID does not exist" 
containerID="aea9d4515f20cdfbd546b072ef046aa909e6f6410217676d722880631f009535" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.990624 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aea9d4515f20cdfbd546b072ef046aa909e6f6410217676d722880631f009535"} err="failed to get container status \"aea9d4515f20cdfbd546b072ef046aa909e6f6410217676d722880631f009535\": rpc error: code = NotFound desc = could not find container \"aea9d4515f20cdfbd546b072ef046aa909e6f6410217676d722880631f009535\": container with ID starting with aea9d4515f20cdfbd546b072ef046aa909e6f6410217676d722880631f009535 not found: ID does not exist" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.990635 5131 scope.go:117] "RemoveContainer" containerID="126470e29248b08b35119158ec1d00986e765ade9dce116264bc2f31d71a8be5" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.990794 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"126470e29248b08b35119158ec1d00986e765ade9dce116264bc2f31d71a8be5"} err="failed to get container status \"126470e29248b08b35119158ec1d00986e765ade9dce116264bc2f31d71a8be5\": rpc error: code = NotFound desc = could not find container \"126470e29248b08b35119158ec1d00986e765ade9dce116264bc2f31d71a8be5\": container with ID starting with 126470e29248b08b35119158ec1d00986e765ade9dce116264bc2f31d71a8be5 not found: ID does not exist" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.990808 5131 scope.go:117] "RemoveContainer" containerID="95fb0f05fbec814936f2b0eba3acdb40bb507d796edc8d78c69d4519b53a985f" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.991148 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95fb0f05fbec814936f2b0eba3acdb40bb507d796edc8d78c69d4519b53a985f"} err="failed to get container status \"95fb0f05fbec814936f2b0eba3acdb40bb507d796edc8d78c69d4519b53a985f\": rpc error: code = NotFound desc = could 
not find container \"95fb0f05fbec814936f2b0eba3acdb40bb507d796edc8d78c69d4519b53a985f\": container with ID starting with 95fb0f05fbec814936f2b0eba3acdb40bb507d796edc8d78c69d4519b53a985f not found: ID does not exist" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.991163 5131 scope.go:117] "RemoveContainer" containerID="92dbe74b1d2e5e7df12950a9d782b5f9bbf56fa3f40e9f9f0b295b3826dcc856" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.991336 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92dbe74b1d2e5e7df12950a9d782b5f9bbf56fa3f40e9f9f0b295b3826dcc856"} err="failed to get container status \"92dbe74b1d2e5e7df12950a9d782b5f9bbf56fa3f40e9f9f0b295b3826dcc856\": rpc error: code = NotFound desc = could not find container \"92dbe74b1d2e5e7df12950a9d782b5f9bbf56fa3f40e9f9f0b295b3826dcc856\": container with ID starting with 92dbe74b1d2e5e7df12950a9d782b5f9bbf56fa3f40e9f9f0b295b3826dcc856 not found: ID does not exist" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.991351 5131 scope.go:117] "RemoveContainer" containerID="45a4508f424cf7d346d29d14019a0e8197a69731f0733d68bd8927a5f487751e" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.991502 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45a4508f424cf7d346d29d14019a0e8197a69731f0733d68bd8927a5f487751e"} err="failed to get container status \"45a4508f424cf7d346d29d14019a0e8197a69731f0733d68bd8927a5f487751e\": rpc error: code = NotFound desc = could not find container \"45a4508f424cf7d346d29d14019a0e8197a69731f0733d68bd8927a5f487751e\": container with ID starting with 45a4508f424cf7d346d29d14019a0e8197a69731f0733d68bd8927a5f487751e not found: ID does not exist" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.991515 5131 scope.go:117] "RemoveContainer" containerID="88b5153fcc0ca30117ca94f1936142fa25bb640f9d8a31b37195b519fd101837" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 
10:00:48.991668 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88b5153fcc0ca30117ca94f1936142fa25bb640f9d8a31b37195b519fd101837"} err="failed to get container status \"88b5153fcc0ca30117ca94f1936142fa25bb640f9d8a31b37195b519fd101837\": rpc error: code = NotFound desc = could not find container \"88b5153fcc0ca30117ca94f1936142fa25bb640f9d8a31b37195b519fd101837\": container with ID starting with 88b5153fcc0ca30117ca94f1936142fa25bb640f9d8a31b37195b519fd101837 not found: ID does not exist" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.991682 5131 scope.go:117] "RemoveContainer" containerID="5ca79d5c3178ab5e7bbe17de7a5f72f6a6f44230a4e31508ecc7b285e555a03a" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.992113 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ca79d5c3178ab5e7bbe17de7a5f72f6a6f44230a4e31508ecc7b285e555a03a"} err="failed to get container status \"5ca79d5c3178ab5e7bbe17de7a5f72f6a6f44230a4e31508ecc7b285e555a03a\": rpc error: code = NotFound desc = could not find container \"5ca79d5c3178ab5e7bbe17de7a5f72f6a6f44230a4e31508ecc7b285e555a03a\": container with ID starting with 5ca79d5c3178ab5e7bbe17de7a5f72f6a6f44230a4e31508ecc7b285e555a03a not found: ID does not exist" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.992218 5131 scope.go:117] "RemoveContainer" containerID="f802473365d7e31f324d34beaa6227d2158e02ba753e62b801fb8b41a09ea25c" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.992520 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f802473365d7e31f324d34beaa6227d2158e02ba753e62b801fb8b41a09ea25c"} err="failed to get container status \"f802473365d7e31f324d34beaa6227d2158e02ba753e62b801fb8b41a09ea25c\": rpc error: code = NotFound desc = could not find container \"f802473365d7e31f324d34beaa6227d2158e02ba753e62b801fb8b41a09ea25c\": container with ID starting with 
f802473365d7e31f324d34beaa6227d2158e02ba753e62b801fb8b41a09ea25c not found: ID does not exist" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.992541 5131 scope.go:117] "RemoveContainer" containerID="04f53033f9dc881a5ee1e627ed97a9064b5b49efc2ede373e4de37a4df60b584" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.992749 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04f53033f9dc881a5ee1e627ed97a9064b5b49efc2ede373e4de37a4df60b584"} err="failed to get container status \"04f53033f9dc881a5ee1e627ed97a9064b5b49efc2ede373e4de37a4df60b584\": rpc error: code = NotFound desc = could not find container \"04f53033f9dc881a5ee1e627ed97a9064b5b49efc2ede373e4de37a4df60b584\": container with ID starting with 04f53033f9dc881a5ee1e627ed97a9064b5b49efc2ede373e4de37a4df60b584 not found: ID does not exist" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.992766 5131 scope.go:117] "RemoveContainer" containerID="aea9d4515f20cdfbd546b072ef046aa909e6f6410217676d722880631f009535" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.992990 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aea9d4515f20cdfbd546b072ef046aa909e6f6410217676d722880631f009535"} err="failed to get container status \"aea9d4515f20cdfbd546b072ef046aa909e6f6410217676d722880631f009535\": rpc error: code = NotFound desc = could not find container \"aea9d4515f20cdfbd546b072ef046aa909e6f6410217676d722880631f009535\": container with ID starting with aea9d4515f20cdfbd546b072ef046aa909e6f6410217676d722880631f009535 not found: ID does not exist" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.993009 5131 scope.go:117] "RemoveContainer" containerID="126470e29248b08b35119158ec1d00986e765ade9dce116264bc2f31d71a8be5" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.993225 5131 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"126470e29248b08b35119158ec1d00986e765ade9dce116264bc2f31d71a8be5"} err="failed to get container status \"126470e29248b08b35119158ec1d00986e765ade9dce116264bc2f31d71a8be5\": rpc error: code = NotFound desc = could not find container \"126470e29248b08b35119158ec1d00986e765ade9dce116264bc2f31d71a8be5\": container with ID starting with 126470e29248b08b35119158ec1d00986e765ade9dce116264bc2f31d71a8be5 not found: ID does not exist" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.993283 5131 scope.go:117] "RemoveContainer" containerID="95fb0f05fbec814936f2b0eba3acdb40bb507d796edc8d78c69d4519b53a985f" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.993752 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95fb0f05fbec814936f2b0eba3acdb40bb507d796edc8d78c69d4519b53a985f"} err="failed to get container status \"95fb0f05fbec814936f2b0eba3acdb40bb507d796edc8d78c69d4519b53a985f\": rpc error: code = NotFound desc = could not find container \"95fb0f05fbec814936f2b0eba3acdb40bb507d796edc8d78c69d4519b53a985f\": container with ID starting with 95fb0f05fbec814936f2b0eba3acdb40bb507d796edc8d78c69d4519b53a985f not found: ID does not exist" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.993771 5131 scope.go:117] "RemoveContainer" containerID="92dbe74b1d2e5e7df12950a9d782b5f9bbf56fa3f40e9f9f0b295b3826dcc856" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.993995 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92dbe74b1d2e5e7df12950a9d782b5f9bbf56fa3f40e9f9f0b295b3826dcc856"} err="failed to get container status \"92dbe74b1d2e5e7df12950a9d782b5f9bbf56fa3f40e9f9f0b295b3826dcc856\": rpc error: code = NotFound desc = could not find container \"92dbe74b1d2e5e7df12950a9d782b5f9bbf56fa3f40e9f9f0b295b3826dcc856\": container with ID starting with 92dbe74b1d2e5e7df12950a9d782b5f9bbf56fa3f40e9f9f0b295b3826dcc856 not found: ID does not 
exist" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.994011 5131 scope.go:117] "RemoveContainer" containerID="45a4508f424cf7d346d29d14019a0e8197a69731f0733d68bd8927a5f487751e" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.994200 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45a4508f424cf7d346d29d14019a0e8197a69731f0733d68bd8927a5f487751e"} err="failed to get container status \"45a4508f424cf7d346d29d14019a0e8197a69731f0733d68bd8927a5f487751e\": rpc error: code = NotFound desc = could not find container \"45a4508f424cf7d346d29d14019a0e8197a69731f0733d68bd8927a5f487751e\": container with ID starting with 45a4508f424cf7d346d29d14019a0e8197a69731f0733d68bd8927a5f487751e not found: ID does not exist" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.994219 5131 scope.go:117] "RemoveContainer" containerID="88b5153fcc0ca30117ca94f1936142fa25bb640f9d8a31b37195b519fd101837" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.994445 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88b5153fcc0ca30117ca94f1936142fa25bb640f9d8a31b37195b519fd101837"} err="failed to get container status \"88b5153fcc0ca30117ca94f1936142fa25bb640f9d8a31b37195b519fd101837\": rpc error: code = NotFound desc = could not find container \"88b5153fcc0ca30117ca94f1936142fa25bb640f9d8a31b37195b519fd101837\": container with ID starting with 88b5153fcc0ca30117ca94f1936142fa25bb640f9d8a31b37195b519fd101837 not found: ID does not exist" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.994463 5131 scope.go:117] "RemoveContainer" containerID="5ca79d5c3178ab5e7bbe17de7a5f72f6a6f44230a4e31508ecc7b285e555a03a" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.994650 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ca79d5c3178ab5e7bbe17de7a5f72f6a6f44230a4e31508ecc7b285e555a03a"} err="failed to get container status 
\"5ca79d5c3178ab5e7bbe17de7a5f72f6a6f44230a4e31508ecc7b285e555a03a\": rpc error: code = NotFound desc = could not find container \"5ca79d5c3178ab5e7bbe17de7a5f72f6a6f44230a4e31508ecc7b285e555a03a\": container with ID starting with 5ca79d5c3178ab5e7bbe17de7a5f72f6a6f44230a4e31508ecc7b285e555a03a not found: ID does not exist" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.994669 5131 scope.go:117] "RemoveContainer" containerID="f802473365d7e31f324d34beaa6227d2158e02ba753e62b801fb8b41a09ea25c" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.994868 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f802473365d7e31f324d34beaa6227d2158e02ba753e62b801fb8b41a09ea25c"} err="failed to get container status \"f802473365d7e31f324d34beaa6227d2158e02ba753e62b801fb8b41a09ea25c\": rpc error: code = NotFound desc = could not find container \"f802473365d7e31f324d34beaa6227d2158e02ba753e62b801fb8b41a09ea25c\": container with ID starting with f802473365d7e31f324d34beaa6227d2158e02ba753e62b801fb8b41a09ea25c not found: ID does not exist" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.994891 5131 scope.go:117] "RemoveContainer" containerID="04f53033f9dc881a5ee1e627ed97a9064b5b49efc2ede373e4de37a4df60b584" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.995220 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04f53033f9dc881a5ee1e627ed97a9064b5b49efc2ede373e4de37a4df60b584"} err="failed to get container status \"04f53033f9dc881a5ee1e627ed97a9064b5b49efc2ede373e4de37a4df60b584\": rpc error: code = NotFound desc = could not find container \"04f53033f9dc881a5ee1e627ed97a9064b5b49efc2ede373e4de37a4df60b584\": container with ID starting with 04f53033f9dc881a5ee1e627ed97a9064b5b49efc2ede373e4de37a4df60b584 not found: ID does not exist" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.995242 5131 scope.go:117] "RemoveContainer" 
containerID="aea9d4515f20cdfbd546b072ef046aa909e6f6410217676d722880631f009535" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.995460 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aea9d4515f20cdfbd546b072ef046aa909e6f6410217676d722880631f009535"} err="failed to get container status \"aea9d4515f20cdfbd546b072ef046aa909e6f6410217676d722880631f009535\": rpc error: code = NotFound desc = could not find container \"aea9d4515f20cdfbd546b072ef046aa909e6f6410217676d722880631f009535\": container with ID starting with aea9d4515f20cdfbd546b072ef046aa909e6f6410217676d722880631f009535 not found: ID does not exist" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.995480 5131 scope.go:117] "RemoveContainer" containerID="126470e29248b08b35119158ec1d00986e765ade9dce116264bc2f31d71a8be5" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.995689 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"126470e29248b08b35119158ec1d00986e765ade9dce116264bc2f31d71a8be5"} err="failed to get container status \"126470e29248b08b35119158ec1d00986e765ade9dce116264bc2f31d71a8be5\": rpc error: code = NotFound desc = could not find container \"126470e29248b08b35119158ec1d00986e765ade9dce116264bc2f31d71a8be5\": container with ID starting with 126470e29248b08b35119158ec1d00986e765ade9dce116264bc2f31d71a8be5 not found: ID does not exist" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.995709 5131 scope.go:117] "RemoveContainer" containerID="95fb0f05fbec814936f2b0eba3acdb40bb507d796edc8d78c69d4519b53a985f" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.995931 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95fb0f05fbec814936f2b0eba3acdb40bb507d796edc8d78c69d4519b53a985f"} err="failed to get container status \"95fb0f05fbec814936f2b0eba3acdb40bb507d796edc8d78c69d4519b53a985f\": rpc error: code = NotFound desc = could 
not find container \"95fb0f05fbec814936f2b0eba3acdb40bb507d796edc8d78c69d4519b53a985f\": container with ID starting with 95fb0f05fbec814936f2b0eba3acdb40bb507d796edc8d78c69d4519b53a985f not found: ID does not exist" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.995952 5131 scope.go:117] "RemoveContainer" containerID="92dbe74b1d2e5e7df12950a9d782b5f9bbf56fa3f40e9f9f0b295b3826dcc856" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.996136 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92dbe74b1d2e5e7df12950a9d782b5f9bbf56fa3f40e9f9f0b295b3826dcc856"} err="failed to get container status \"92dbe74b1d2e5e7df12950a9d782b5f9bbf56fa3f40e9f9f0b295b3826dcc856\": rpc error: code = NotFound desc = could not find container \"92dbe74b1d2e5e7df12950a9d782b5f9bbf56fa3f40e9f9f0b295b3826dcc856\": container with ID starting with 92dbe74b1d2e5e7df12950a9d782b5f9bbf56fa3f40e9f9f0b295b3826dcc856 not found: ID does not exist" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.996155 5131 scope.go:117] "RemoveContainer" containerID="45a4508f424cf7d346d29d14019a0e8197a69731f0733d68bd8927a5f487751e" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.996336 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45a4508f424cf7d346d29d14019a0e8197a69731f0733d68bd8927a5f487751e"} err="failed to get container status \"45a4508f424cf7d346d29d14019a0e8197a69731f0733d68bd8927a5f487751e\": rpc error: code = NotFound desc = could not find container \"45a4508f424cf7d346d29d14019a0e8197a69731f0733d68bd8927a5f487751e\": container with ID starting with 45a4508f424cf7d346d29d14019a0e8197a69731f0733d68bd8927a5f487751e not found: ID does not exist" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.996356 5131 scope.go:117] "RemoveContainer" containerID="88b5153fcc0ca30117ca94f1936142fa25bb640f9d8a31b37195b519fd101837" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 
10:00:48.996529 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88b5153fcc0ca30117ca94f1936142fa25bb640f9d8a31b37195b519fd101837"} err="failed to get container status \"88b5153fcc0ca30117ca94f1936142fa25bb640f9d8a31b37195b519fd101837\": rpc error: code = NotFound desc = could not find container \"88b5153fcc0ca30117ca94f1936142fa25bb640f9d8a31b37195b519fd101837\": container with ID starting with 88b5153fcc0ca30117ca94f1936142fa25bb640f9d8a31b37195b519fd101837 not found: ID does not exist" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.996552 5131 scope.go:117] "RemoveContainer" containerID="5ca79d5c3178ab5e7bbe17de7a5f72f6a6f44230a4e31508ecc7b285e555a03a" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.996766 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ca79d5c3178ab5e7bbe17de7a5f72f6a6f44230a4e31508ecc7b285e555a03a"} err="failed to get container status \"5ca79d5c3178ab5e7bbe17de7a5f72f6a6f44230a4e31508ecc7b285e555a03a\": rpc error: code = NotFound desc = could not find container \"5ca79d5c3178ab5e7bbe17de7a5f72f6a6f44230a4e31508ecc7b285e555a03a\": container with ID starting with 5ca79d5c3178ab5e7bbe17de7a5f72f6a6f44230a4e31508ecc7b285e555a03a not found: ID does not exist" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.996787 5131 scope.go:117] "RemoveContainer" containerID="f802473365d7e31f324d34beaa6227d2158e02ba753e62b801fb8b41a09ea25c" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.996997 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f802473365d7e31f324d34beaa6227d2158e02ba753e62b801fb8b41a09ea25c"} err="failed to get container status \"f802473365d7e31f324d34beaa6227d2158e02ba753e62b801fb8b41a09ea25c\": rpc error: code = NotFound desc = could not find container \"f802473365d7e31f324d34beaa6227d2158e02ba753e62b801fb8b41a09ea25c\": container with ID starting with 
f802473365d7e31f324d34beaa6227d2158e02ba753e62b801fb8b41a09ea25c not found: ID does not exist" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.997020 5131 scope.go:117] "RemoveContainer" containerID="04f53033f9dc881a5ee1e627ed97a9064b5b49efc2ede373e4de37a4df60b584" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.997210 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04f53033f9dc881a5ee1e627ed97a9064b5b49efc2ede373e4de37a4df60b584"} err="failed to get container status \"04f53033f9dc881a5ee1e627ed97a9064b5b49efc2ede373e4de37a4df60b584\": rpc error: code = NotFound desc = could not find container \"04f53033f9dc881a5ee1e627ed97a9064b5b49efc2ede373e4de37a4df60b584\": container with ID starting with 04f53033f9dc881a5ee1e627ed97a9064b5b49efc2ede373e4de37a4df60b584 not found: ID does not exist" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.997231 5131 scope.go:117] "RemoveContainer" containerID="aea9d4515f20cdfbd546b072ef046aa909e6f6410217676d722880631f009535" Jan 07 10:00:48 crc kubenswrapper[5131]: I0107 10:00:48.997456 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aea9d4515f20cdfbd546b072ef046aa909e6f6410217676d722880631f009535"} err="failed to get container status \"aea9d4515f20cdfbd546b072ef046aa909e6f6410217676d722880631f009535\": rpc error: code = NotFound desc = could not find container \"aea9d4515f20cdfbd546b072ef046aa909e6f6410217676d722880631f009535\": container with ID starting with aea9d4515f20cdfbd546b072ef046aa909e6f6410217676d722880631f009535 not found: ID does not exist" Jan 07 10:00:49 crc kubenswrapper[5131]: I0107 10:00:49.007693 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsv2c\" (UniqueName: \"kubernetes.io/projected/e1c4e5a9-733a-471f-b6f0-dfaced935ba8-kube-api-access-nsv2c\") pod \"ovnkube-node-6zhj6\" (UID: \"e1c4e5a9-733a-471f-b6f0-dfaced935ba8\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6" Jan 07 10:00:49 crc kubenswrapper[5131]: I0107 10:00:49.125477 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6" Jan 07 10:00:49 crc kubenswrapper[5131]: I0107 10:00:49.135559 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-kpj7m"] Jan 07 10:00:49 crc kubenswrapper[5131]: I0107 10:00:49.140783 5131 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-kpj7m"] Jan 07 10:00:49 crc kubenswrapper[5131]: W0107 10:00:49.143586 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1c4e5a9_733a_471f_b6f0_dfaced935ba8.slice/crio-d0ca9c634133c06b17372d8c634a36bb06feedf61a98191b39036fd4856d6e8f WatchSource:0}: Error finding container d0ca9c634133c06b17372d8c634a36bb06feedf61a98191b39036fd4856d6e8f: Status 404 returned error can't find the container with id d0ca9c634133c06b17372d8c634a36bb06feedf61a98191b39036fd4856d6e8f Jan 07 10:00:49 crc kubenswrapper[5131]: I0107 10:00:49.809785 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wcqw9_a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1/kube-multus/0.log" Jan 07 10:00:49 crc kubenswrapper[5131]: I0107 10:00:49.810372 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wcqw9" event={"ID":"a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1","Type":"ContainerStarted","Data":"4abee6a59f25285dac5e90e3abaad0c391837ca66f06315740f2be693ef3b6fa"} Jan 07 10:00:49 crc kubenswrapper[5131]: I0107 10:00:49.813142 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-8cfsq" event={"ID":"f1fb1094-265d-4837-b9b8-58afa08a416d","Type":"ContainerStarted","Data":"d9f8aa803a7ed736f41b8d1f3f415642ecd1634b4298d8f05f0ee5faccd971ee"} Jan 07 10:00:49 crc 
kubenswrapper[5131]: I0107 10:00:49.813200 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-8cfsq" event={"ID":"f1fb1094-265d-4837-b9b8-58afa08a416d","Type":"ContainerStarted","Data":"c6de0b7072f9fed22d297f0aba5dd4a0c16a3fa9187572a6f41078530d0e3357"} Jan 07 10:00:49 crc kubenswrapper[5131]: I0107 10:00:49.820359 5131 generic.go:358] "Generic (PLEG): container finished" podID="e1c4e5a9-733a-471f-b6f0-dfaced935ba8" containerID="5f4e9c3b1df3c5f5553a9cdf65d825e69ec7d925d8e5406bc8fd48d8380d1b47" exitCode=0 Jan 07 10:00:49 crc kubenswrapper[5131]: I0107 10:00:49.820454 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6" event={"ID":"e1c4e5a9-733a-471f-b6f0-dfaced935ba8","Type":"ContainerDied","Data":"5f4e9c3b1df3c5f5553a9cdf65d825e69ec7d925d8e5406bc8fd48d8380d1b47"} Jan 07 10:00:49 crc kubenswrapper[5131]: I0107 10:00:49.820498 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6" event={"ID":"e1c4e5a9-733a-471f-b6f0-dfaced935ba8","Type":"ContainerStarted","Data":"d0ca9c634133c06b17372d8c634a36bb06feedf61a98191b39036fd4856d6e8f"} Jan 07 10:00:49 crc kubenswrapper[5131]: I0107 10:00:49.878302 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-8cfsq" podStartSLOduration=2.878274094 podStartE2EDuration="2.878274094s" podCreationTimestamp="2026-01-07 10:00:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 10:00:49.867562502 +0000 UTC m=+678.033864076" watchObservedRunningTime="2026-01-07 10:00:49.878274094 +0000 UTC m=+678.044575668" Jan 07 10:00:50 crc kubenswrapper[5131]: I0107 10:00:50.193787 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="592342ad-cf5f-4290-aa15-e99a6454cbf5" 
path="/var/lib/kubelet/pods/592342ad-cf5f-4290-aa15-e99a6454cbf5/volumes" Jan 07 10:00:50 crc kubenswrapper[5131]: I0107 10:00:50.195423 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad935b69-bef7-46a2-a03a-367404c13329" path="/var/lib/kubelet/pods/ad935b69-bef7-46a2-a03a-367404c13329/volumes" Jan 07 10:00:50 crc kubenswrapper[5131]: I0107 10:00:50.663267 5131 patch_prober.go:28] interesting pod/machine-config-daemon-dvdrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 07 10:00:50 crc kubenswrapper[5131]: I0107 10:00:50.663704 5131 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 07 10:00:50 crc kubenswrapper[5131]: I0107 10:00:50.663771 5131 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" Jan 07 10:00:50 crc kubenswrapper[5131]: I0107 10:00:50.664670 5131 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"95b2f2f38ab6b9d142bf531750364a1f6ffccfcd46ca5680da77d1d639a07cbc"} pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 07 10:00:50 crc kubenswrapper[5131]: I0107 10:00:50.664792 5131 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73" 
containerName="machine-config-daemon" containerID="cri-o://95b2f2f38ab6b9d142bf531750364a1f6ffccfcd46ca5680da77d1d639a07cbc" gracePeriod=600 Jan 07 10:00:50 crc kubenswrapper[5131]: I0107 10:00:50.834618 5131 generic.go:358] "Generic (PLEG): container finished" podID="3942e752-44ba-4678-8723-6cd778e60d73" containerID="95b2f2f38ab6b9d142bf531750364a1f6ffccfcd46ca5680da77d1d639a07cbc" exitCode=0 Jan 07 10:00:50 crc kubenswrapper[5131]: I0107 10:00:50.834907 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" event={"ID":"3942e752-44ba-4678-8723-6cd778e60d73","Type":"ContainerDied","Data":"95b2f2f38ab6b9d142bf531750364a1f6ffccfcd46ca5680da77d1d639a07cbc"} Jan 07 10:00:50 crc kubenswrapper[5131]: I0107 10:00:50.834958 5131 scope.go:117] "RemoveContainer" containerID="e79b67bc8389c68c2ac09cb38bf889a9519e79a63ac71b01c26e01c34973b2a7" Jan 07 10:00:50 crc kubenswrapper[5131]: I0107 10:00:50.839876 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6" event={"ID":"e1c4e5a9-733a-471f-b6f0-dfaced935ba8","Type":"ContainerStarted","Data":"f26e0638381db3097f0e54d5ba48814eac9720571de86b5848f3d5a493cf4fe8"} Jan 07 10:00:50 crc kubenswrapper[5131]: I0107 10:00:50.839955 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6" event={"ID":"e1c4e5a9-733a-471f-b6f0-dfaced935ba8","Type":"ContainerStarted","Data":"67d1544cec51c8fe8d21e4ce0944b703550717be4bd9ce7ab122c9832a700524"} Jan 07 10:00:50 crc kubenswrapper[5131]: I0107 10:00:50.839976 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6" event={"ID":"e1c4e5a9-733a-471f-b6f0-dfaced935ba8","Type":"ContainerStarted","Data":"4e8480d9a8a216e8161bc064d74680140a10d28f1b05b7f01dc526907628f176"} Jan 07 10:00:50 crc kubenswrapper[5131]: I0107 10:00:50.840010 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6" event={"ID":"e1c4e5a9-733a-471f-b6f0-dfaced935ba8","Type":"ContainerStarted","Data":"4d0ae581e9fc6532e1657f561f13d594946aac89010ef923f3110093ca1b50bb"} Jan 07 10:00:50 crc kubenswrapper[5131]: I0107 10:00:50.840029 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6" event={"ID":"e1c4e5a9-733a-471f-b6f0-dfaced935ba8","Type":"ContainerStarted","Data":"2c460ecc5ee923d185515741b8f7b3192a805784e0bc62db5689f382089a838d"} Jan 07 10:00:50 crc kubenswrapper[5131]: I0107 10:00:50.840046 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6" event={"ID":"e1c4e5a9-733a-471f-b6f0-dfaced935ba8","Type":"ContainerStarted","Data":"9c5b761fae47fd5000537cd900382e67350e3c6dcbfa7a867d7b79b78841d4b7"} Jan 07 10:00:51 crc kubenswrapper[5131]: I0107 10:00:51.852622 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" event={"ID":"3942e752-44ba-4678-8723-6cd778e60d73","Type":"ContainerStarted","Data":"7b5fd7c41683ca17dd95a35646c53ce725c855bc5bff2a2030ae596afb470eb2"} Jan 07 10:00:53 crc kubenswrapper[5131]: I0107 10:00:53.884403 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6" event={"ID":"e1c4e5a9-733a-471f-b6f0-dfaced935ba8","Type":"ContainerStarted","Data":"02a081206cf3e5f0895bf0e8600df0fcf4a4a12efd05cd2cbd899bdcb114e8b3"} Jan 07 10:00:56 crc kubenswrapper[5131]: I0107 10:00:56.912684 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6" event={"ID":"e1c4e5a9-733a-471f-b6f0-dfaced935ba8","Type":"ContainerStarted","Data":"78060e05575c84fcc11008661c8a3c111873ca9f6efe1c853134a82eaa4a7e45"} Jan 07 10:00:56 crc kubenswrapper[5131]: I0107 10:00:56.913516 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6" Jan 07 10:00:56 crc kubenswrapper[5131]: I0107 10:00:56.955781 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6" podStartSLOduration=8.955763695 podStartE2EDuration="8.955763695s" podCreationTimestamp="2026-01-07 10:00:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 10:00:56.953014006 +0000 UTC m=+685.119315630" watchObservedRunningTime="2026-01-07 10:00:56.955763695 +0000 UTC m=+685.122065259" Jan 07 10:00:56 crc kubenswrapper[5131]: I0107 10:00:56.962872 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6" Jan 07 10:00:57 crc kubenswrapper[5131]: I0107 10:00:57.919530 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6" Jan 07 10:00:57 crc kubenswrapper[5131]: I0107 10:00:57.919899 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6" Jan 07 10:00:57 crc kubenswrapper[5131]: I0107 10:00:57.944385 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6" Jan 07 10:01:29 crc kubenswrapper[5131]: I0107 10:01:29.966798 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6zhj6" Jan 07 10:01:56 crc kubenswrapper[5131]: I0107 10:01:56.503199 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bgfp5"] Jan 07 10:01:56 crc kubenswrapper[5131]: I0107 10:01:56.504681 5131 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bgfp5" podUID="d2e7ccee-a017-4cfa-8d6e-4c56c68e31c1" 
containerName="registry-server" containerID="cri-o://296cfa3b9007799a672f9dab866c99c90d02dc3a3035a8ba568ed19a518ca5a5" gracePeriod=30 Jan 07 10:01:56 crc kubenswrapper[5131]: I0107 10:01:56.886236 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bgfp5" Jan 07 10:01:56 crc kubenswrapper[5131]: I0107 10:01:56.908072 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbd5c\" (UniqueName: \"kubernetes.io/projected/d2e7ccee-a017-4cfa-8d6e-4c56c68e31c1-kube-api-access-wbd5c\") pod \"d2e7ccee-a017-4cfa-8d6e-4c56c68e31c1\" (UID: \"d2e7ccee-a017-4cfa-8d6e-4c56c68e31c1\") " Jan 07 10:01:56 crc kubenswrapper[5131]: I0107 10:01:56.908153 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2e7ccee-a017-4cfa-8d6e-4c56c68e31c1-utilities\") pod \"d2e7ccee-a017-4cfa-8d6e-4c56c68e31c1\" (UID: \"d2e7ccee-a017-4cfa-8d6e-4c56c68e31c1\") " Jan 07 10:01:56 crc kubenswrapper[5131]: I0107 10:01:56.908287 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2e7ccee-a017-4cfa-8d6e-4c56c68e31c1-catalog-content\") pod \"d2e7ccee-a017-4cfa-8d6e-4c56c68e31c1\" (UID: \"d2e7ccee-a017-4cfa-8d6e-4c56c68e31c1\") " Jan 07 10:01:56 crc kubenswrapper[5131]: I0107 10:01:56.911006 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2e7ccee-a017-4cfa-8d6e-4c56c68e31c1-utilities" (OuterVolumeSpecName: "utilities") pod "d2e7ccee-a017-4cfa-8d6e-4c56c68e31c1" (UID: "d2e7ccee-a017-4cfa-8d6e-4c56c68e31c1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:01:56 crc kubenswrapper[5131]: I0107 10:01:56.917038 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2e7ccee-a017-4cfa-8d6e-4c56c68e31c1-kube-api-access-wbd5c" (OuterVolumeSpecName: "kube-api-access-wbd5c") pod "d2e7ccee-a017-4cfa-8d6e-4c56c68e31c1" (UID: "d2e7ccee-a017-4cfa-8d6e-4c56c68e31c1"). InnerVolumeSpecName "kube-api-access-wbd5c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 10:01:56 crc kubenswrapper[5131]: I0107 10:01:56.922630 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2e7ccee-a017-4cfa-8d6e-4c56c68e31c1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d2e7ccee-a017-4cfa-8d6e-4c56c68e31c1" (UID: "d2e7ccee-a017-4cfa-8d6e-4c56c68e31c1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:01:57 crc kubenswrapper[5131]: I0107 10:01:57.009623 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wbd5c\" (UniqueName: \"kubernetes.io/projected/d2e7ccee-a017-4cfa-8d6e-4c56c68e31c1-kube-api-access-wbd5c\") on node \"crc\" DevicePath \"\"" Jan 07 10:01:57 crc kubenswrapper[5131]: I0107 10:01:57.009883 5131 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2e7ccee-a017-4cfa-8d6e-4c56c68e31c1-utilities\") on node \"crc\" DevicePath \"\"" Jan 07 10:01:57 crc kubenswrapper[5131]: I0107 10:01:57.009975 5131 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2e7ccee-a017-4cfa-8d6e-4c56c68e31c1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 07 10:01:57 crc kubenswrapper[5131]: I0107 10:01:57.377495 5131 generic.go:358] "Generic (PLEG): container finished" podID="d2e7ccee-a017-4cfa-8d6e-4c56c68e31c1" 
containerID="296cfa3b9007799a672f9dab866c99c90d02dc3a3035a8ba568ed19a518ca5a5" exitCode=0 Jan 07 10:01:57 crc kubenswrapper[5131]: I0107 10:01:57.377613 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bgfp5" event={"ID":"d2e7ccee-a017-4cfa-8d6e-4c56c68e31c1","Type":"ContainerDied","Data":"296cfa3b9007799a672f9dab866c99c90d02dc3a3035a8ba568ed19a518ca5a5"} Jan 07 10:01:57 crc kubenswrapper[5131]: I0107 10:01:57.377645 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bgfp5" Jan 07 10:01:57 crc kubenswrapper[5131]: I0107 10:01:57.377673 5131 scope.go:117] "RemoveContainer" containerID="296cfa3b9007799a672f9dab866c99c90d02dc3a3035a8ba568ed19a518ca5a5" Jan 07 10:01:57 crc kubenswrapper[5131]: I0107 10:01:57.377654 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bgfp5" event={"ID":"d2e7ccee-a017-4cfa-8d6e-4c56c68e31c1","Type":"ContainerDied","Data":"a6460152d9ea77915d43d9297d3cbba5884190b8b4882c9a9d6593180c408994"} Jan 07 10:01:57 crc kubenswrapper[5131]: I0107 10:01:57.411414 5131 scope.go:117] "RemoveContainer" containerID="83008bb83257163c34f6ddd6dc16f94c9a4c954250b2b7e90ce28e0b57de0555" Jan 07 10:01:57 crc kubenswrapper[5131]: I0107 10:01:57.427930 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bgfp5"] Jan 07 10:01:57 crc kubenswrapper[5131]: I0107 10:01:57.434525 5131 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bgfp5"] Jan 07 10:01:57 crc kubenswrapper[5131]: I0107 10:01:57.453608 5131 scope.go:117] "RemoveContainer" containerID="4456ff8cf1f24e802046ef4a3e0e3e25a06e65314293465ded2da49bc4528cac" Jan 07 10:01:57 crc kubenswrapper[5131]: I0107 10:01:57.471194 5131 scope.go:117] "RemoveContainer" containerID="296cfa3b9007799a672f9dab866c99c90d02dc3a3035a8ba568ed19a518ca5a5" Jan 07 
10:01:57 crc kubenswrapper[5131]: E0107 10:01:57.471644 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"296cfa3b9007799a672f9dab866c99c90d02dc3a3035a8ba568ed19a518ca5a5\": container with ID starting with 296cfa3b9007799a672f9dab866c99c90d02dc3a3035a8ba568ed19a518ca5a5 not found: ID does not exist" containerID="296cfa3b9007799a672f9dab866c99c90d02dc3a3035a8ba568ed19a518ca5a5" Jan 07 10:01:57 crc kubenswrapper[5131]: I0107 10:01:57.471678 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"296cfa3b9007799a672f9dab866c99c90d02dc3a3035a8ba568ed19a518ca5a5"} err="failed to get container status \"296cfa3b9007799a672f9dab866c99c90d02dc3a3035a8ba568ed19a518ca5a5\": rpc error: code = NotFound desc = could not find container \"296cfa3b9007799a672f9dab866c99c90d02dc3a3035a8ba568ed19a518ca5a5\": container with ID starting with 296cfa3b9007799a672f9dab866c99c90d02dc3a3035a8ba568ed19a518ca5a5 not found: ID does not exist" Jan 07 10:01:57 crc kubenswrapper[5131]: I0107 10:01:57.471700 5131 scope.go:117] "RemoveContainer" containerID="83008bb83257163c34f6ddd6dc16f94c9a4c954250b2b7e90ce28e0b57de0555" Jan 07 10:01:57 crc kubenswrapper[5131]: E0107 10:01:57.472087 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83008bb83257163c34f6ddd6dc16f94c9a4c954250b2b7e90ce28e0b57de0555\": container with ID starting with 83008bb83257163c34f6ddd6dc16f94c9a4c954250b2b7e90ce28e0b57de0555 not found: ID does not exist" containerID="83008bb83257163c34f6ddd6dc16f94c9a4c954250b2b7e90ce28e0b57de0555" Jan 07 10:01:57 crc kubenswrapper[5131]: I0107 10:01:57.472110 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83008bb83257163c34f6ddd6dc16f94c9a4c954250b2b7e90ce28e0b57de0555"} err="failed to get container status 
\"83008bb83257163c34f6ddd6dc16f94c9a4c954250b2b7e90ce28e0b57de0555\": rpc error: code = NotFound desc = could not find container \"83008bb83257163c34f6ddd6dc16f94c9a4c954250b2b7e90ce28e0b57de0555\": container with ID starting with 83008bb83257163c34f6ddd6dc16f94c9a4c954250b2b7e90ce28e0b57de0555 not found: ID does not exist" Jan 07 10:01:57 crc kubenswrapper[5131]: I0107 10:01:57.472126 5131 scope.go:117] "RemoveContainer" containerID="4456ff8cf1f24e802046ef4a3e0e3e25a06e65314293465ded2da49bc4528cac" Jan 07 10:01:57 crc kubenswrapper[5131]: E0107 10:01:57.472520 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4456ff8cf1f24e802046ef4a3e0e3e25a06e65314293465ded2da49bc4528cac\": container with ID starting with 4456ff8cf1f24e802046ef4a3e0e3e25a06e65314293465ded2da49bc4528cac not found: ID does not exist" containerID="4456ff8cf1f24e802046ef4a3e0e3e25a06e65314293465ded2da49bc4528cac" Jan 07 10:01:57 crc kubenswrapper[5131]: I0107 10:01:57.472668 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4456ff8cf1f24e802046ef4a3e0e3e25a06e65314293465ded2da49bc4528cac"} err="failed to get container status \"4456ff8cf1f24e802046ef4a3e0e3e25a06e65314293465ded2da49bc4528cac\": rpc error: code = NotFound desc = could not find container \"4456ff8cf1f24e802046ef4a3e0e3e25a06e65314293465ded2da49bc4528cac\": container with ID starting with 4456ff8cf1f24e802046ef4a3e0e3e25a06e65314293465ded2da49bc4528cac not found: ID does not exist" Jan 07 10:01:58 crc kubenswrapper[5131]: I0107 10:01:58.190076 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2e7ccee-a017-4cfa-8d6e-4c56c68e31c1" path="/var/lib/kubelet/pods/d2e7ccee-a017-4cfa-8d6e-4c56c68e31c1/volumes" Jan 07 10:02:00 crc kubenswrapper[5131]: I0107 10:02:00.133404 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29463002-5t29p"] Jan 07 10:02:00 
crc kubenswrapper[5131]: I0107 10:02:00.134070 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d2e7ccee-a017-4cfa-8d6e-4c56c68e31c1" containerName="extract-content" Jan 07 10:02:00 crc kubenswrapper[5131]: I0107 10:02:00.134091 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2e7ccee-a017-4cfa-8d6e-4c56c68e31c1" containerName="extract-content" Jan 07 10:02:00 crc kubenswrapper[5131]: I0107 10:02:00.134122 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d2e7ccee-a017-4cfa-8d6e-4c56c68e31c1" containerName="extract-utilities" Jan 07 10:02:00 crc kubenswrapper[5131]: I0107 10:02:00.134129 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2e7ccee-a017-4cfa-8d6e-4c56c68e31c1" containerName="extract-utilities" Jan 07 10:02:00 crc kubenswrapper[5131]: I0107 10:02:00.134156 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d2e7ccee-a017-4cfa-8d6e-4c56c68e31c1" containerName="registry-server" Jan 07 10:02:00 crc kubenswrapper[5131]: I0107 10:02:00.134163 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2e7ccee-a017-4cfa-8d6e-4c56c68e31c1" containerName="registry-server" Jan 07 10:02:00 crc kubenswrapper[5131]: I0107 10:02:00.134263 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="d2e7ccee-a017-4cfa-8d6e-4c56c68e31c1" containerName="registry-server" Jan 07 10:02:00 crc kubenswrapper[5131]: I0107 10:02:00.146791 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29463002-5t29p"] Jan 07 10:02:00 crc kubenswrapper[5131]: I0107 10:02:00.146986 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29463002-5t29p" Jan 07 10:02:00 crc kubenswrapper[5131]: I0107 10:02:00.153531 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 07 10:02:00 crc kubenswrapper[5131]: I0107 10:02:00.153580 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 07 10:02:00 crc kubenswrapper[5131]: I0107 10:02:00.154069 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-l8fwl\"" Jan 07 10:02:00 crc kubenswrapper[5131]: I0107 10:02:00.253266 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frs2z\" (UniqueName: \"kubernetes.io/projected/641ea204-fc48-4482-bf9e-1d45e8b8e7c7-kube-api-access-frs2z\") pod \"auto-csr-approver-29463002-5t29p\" (UID: \"641ea204-fc48-4482-bf9e-1d45e8b8e7c7\") " pod="openshift-infra/auto-csr-approver-29463002-5t29p" Jan 07 10:02:00 crc kubenswrapper[5131]: I0107 10:02:00.354529 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-frs2z\" (UniqueName: \"kubernetes.io/projected/641ea204-fc48-4482-bf9e-1d45e8b8e7c7-kube-api-access-frs2z\") pod \"auto-csr-approver-29463002-5t29p\" (UID: \"641ea204-fc48-4482-bf9e-1d45e8b8e7c7\") " pod="openshift-infra/auto-csr-approver-29463002-5t29p" Jan 07 10:02:00 crc kubenswrapper[5131]: I0107 10:02:00.378913 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-frs2z\" (UniqueName: \"kubernetes.io/projected/641ea204-fc48-4482-bf9e-1d45e8b8e7c7-kube-api-access-frs2z\") pod \"auto-csr-approver-29463002-5t29p\" (UID: \"641ea204-fc48-4482-bf9e-1d45e8b8e7c7\") " pod="openshift-infra/auto-csr-approver-29463002-5t29p" Jan 07 10:02:00 crc kubenswrapper[5131]: I0107 10:02:00.462552 5131 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29463002-5t29p" Jan 07 10:02:00 crc kubenswrapper[5131]: I0107 10:02:00.656210 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29463002-5t29p"] Jan 07 10:02:01 crc kubenswrapper[5131]: I0107 10:02:01.188241 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rbdss"] Jan 07 10:02:01 crc kubenswrapper[5131]: I0107 10:02:01.200183 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rbdss" Jan 07 10:02:01 crc kubenswrapper[5131]: I0107 10:02:01.203695 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Jan 07 10:02:01 crc kubenswrapper[5131]: I0107 10:02:01.210193 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rbdss"] Jan 07 10:02:01 crc kubenswrapper[5131]: I0107 10:02:01.281433 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e2193270-1adc-4d1b-b07b-b705d3c0fa2e-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rbdss\" (UID: \"e2193270-1adc-4d1b-b07b-b705d3c0fa2e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rbdss" Jan 07 10:02:01 crc kubenswrapper[5131]: I0107 10:02:01.281636 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz2sl\" (UniqueName: \"kubernetes.io/projected/e2193270-1adc-4d1b-b07b-b705d3c0fa2e-kube-api-access-dz2sl\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rbdss\" (UID: 
\"e2193270-1adc-4d1b-b07b-b705d3c0fa2e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rbdss" Jan 07 10:02:01 crc kubenswrapper[5131]: I0107 10:02:01.281704 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e2193270-1adc-4d1b-b07b-b705d3c0fa2e-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rbdss\" (UID: \"e2193270-1adc-4d1b-b07b-b705d3c0fa2e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rbdss" Jan 07 10:02:01 crc kubenswrapper[5131]: I0107 10:02:01.382759 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e2193270-1adc-4d1b-b07b-b705d3c0fa2e-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rbdss\" (UID: \"e2193270-1adc-4d1b-b07b-b705d3c0fa2e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rbdss" Jan 07 10:02:01 crc kubenswrapper[5131]: I0107 10:02:01.382861 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dz2sl\" (UniqueName: \"kubernetes.io/projected/e2193270-1adc-4d1b-b07b-b705d3c0fa2e-kube-api-access-dz2sl\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rbdss\" (UID: \"e2193270-1adc-4d1b-b07b-b705d3c0fa2e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rbdss" Jan 07 10:02:01 crc kubenswrapper[5131]: I0107 10:02:01.383017 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e2193270-1adc-4d1b-b07b-b705d3c0fa2e-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rbdss\" (UID: \"e2193270-1adc-4d1b-b07b-b705d3c0fa2e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rbdss" 
Jan 07 10:02:01 crc kubenswrapper[5131]: I0107 10:02:01.383748 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e2193270-1adc-4d1b-b07b-b705d3c0fa2e-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rbdss\" (UID: \"e2193270-1adc-4d1b-b07b-b705d3c0fa2e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rbdss" Jan 07 10:02:01 crc kubenswrapper[5131]: I0107 10:02:01.383794 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e2193270-1adc-4d1b-b07b-b705d3c0fa2e-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rbdss\" (UID: \"e2193270-1adc-4d1b-b07b-b705d3c0fa2e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rbdss" Jan 07 10:02:01 crc kubenswrapper[5131]: I0107 10:02:01.403773 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dz2sl\" (UniqueName: \"kubernetes.io/projected/e2193270-1adc-4d1b-b07b-b705d3c0fa2e-kube-api-access-dz2sl\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rbdss\" (UID: \"e2193270-1adc-4d1b-b07b-b705d3c0fa2e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rbdss" Jan 07 10:02:01 crc kubenswrapper[5131]: I0107 10:02:01.405595 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29463002-5t29p" event={"ID":"641ea204-fc48-4482-bf9e-1d45e8b8e7c7","Type":"ContainerStarted","Data":"6f9615f1d2136981700a3433f79d3d22ebf23cdcb63f5731059e196ad747df02"} Jan 07 10:02:01 crc kubenswrapper[5131]: I0107 10:02:01.541265 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rbdss" Jan 07 10:02:01 crc kubenswrapper[5131]: I0107 10:02:01.826913 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rbdss"] Jan 07 10:02:01 crc kubenswrapper[5131]: W0107 10:02:01.837118 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2193270_1adc_4d1b_b07b_b705d3c0fa2e.slice/crio-68cc67419c09949eeafc2dcf3aa72e6fe71d96ddae5772ec9d8406a5491ce025 WatchSource:0}: Error finding container 68cc67419c09949eeafc2dcf3aa72e6fe71d96ddae5772ec9d8406a5491ce025: Status 404 returned error can't find the container with id 68cc67419c09949eeafc2dcf3aa72e6fe71d96ddae5772ec9d8406a5491ce025 Jan 07 10:02:02 crc kubenswrapper[5131]: I0107 10:02:02.412990 5131 generic.go:358] "Generic (PLEG): container finished" podID="641ea204-fc48-4482-bf9e-1d45e8b8e7c7" containerID="537dfc5f6fabc7eb3fec8b77b6c5aff88389c15c2de6afc009ff2ee054bfe24d" exitCode=0 Jan 07 10:02:02 crc kubenswrapper[5131]: I0107 10:02:02.413200 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29463002-5t29p" event={"ID":"641ea204-fc48-4482-bf9e-1d45e8b8e7c7","Type":"ContainerDied","Data":"537dfc5f6fabc7eb3fec8b77b6c5aff88389c15c2de6afc009ff2ee054bfe24d"} Jan 07 10:02:02 crc kubenswrapper[5131]: I0107 10:02:02.414622 5131 generic.go:358] "Generic (PLEG): container finished" podID="e2193270-1adc-4d1b-b07b-b705d3c0fa2e" containerID="922a9cb86d6c1e6800d51471d2310ec8272191336b0bb0ef3725314f8f50f5a5" exitCode=0 Jan 07 10:02:02 crc kubenswrapper[5131]: I0107 10:02:02.414666 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rbdss" 
event={"ID":"e2193270-1adc-4d1b-b07b-b705d3c0fa2e","Type":"ContainerDied","Data":"922a9cb86d6c1e6800d51471d2310ec8272191336b0bb0ef3725314f8f50f5a5"} Jan 07 10:02:02 crc kubenswrapper[5131]: I0107 10:02:02.414696 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rbdss" event={"ID":"e2193270-1adc-4d1b-b07b-b705d3c0fa2e","Type":"ContainerStarted","Data":"68cc67419c09949eeafc2dcf3aa72e6fe71d96ddae5772ec9d8406a5491ce025"} Jan 07 10:02:03 crc kubenswrapper[5131]: I0107 10:02:03.735826 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29463002-5t29p" Jan 07 10:02:03 crc kubenswrapper[5131]: I0107 10:02:03.815119 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frs2z\" (UniqueName: \"kubernetes.io/projected/641ea204-fc48-4482-bf9e-1d45e8b8e7c7-kube-api-access-frs2z\") pod \"641ea204-fc48-4482-bf9e-1d45e8b8e7c7\" (UID: \"641ea204-fc48-4482-bf9e-1d45e8b8e7c7\") " Jan 07 10:02:03 crc kubenswrapper[5131]: I0107 10:02:03.826065 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/641ea204-fc48-4482-bf9e-1d45e8b8e7c7-kube-api-access-frs2z" (OuterVolumeSpecName: "kube-api-access-frs2z") pod "641ea204-fc48-4482-bf9e-1d45e8b8e7c7" (UID: "641ea204-fc48-4482-bf9e-1d45e8b8e7c7"). InnerVolumeSpecName "kube-api-access-frs2z". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 10:02:03 crc kubenswrapper[5131]: I0107 10:02:03.917175 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-frs2z\" (UniqueName: \"kubernetes.io/projected/641ea204-fc48-4482-bf9e-1d45e8b8e7c7-kube-api-access-frs2z\") on node \"crc\" DevicePath \"\"" Jan 07 10:02:04 crc kubenswrapper[5131]: I0107 10:02:04.119625 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-btkxt"] Jan 07 10:02:04 crc kubenswrapper[5131]: I0107 10:02:04.121366 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="641ea204-fc48-4482-bf9e-1d45e8b8e7c7" containerName="oc" Jan 07 10:02:04 crc kubenswrapper[5131]: I0107 10:02:04.121415 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="641ea204-fc48-4482-bf9e-1d45e8b8e7c7" containerName="oc" Jan 07 10:02:04 crc kubenswrapper[5131]: I0107 10:02:04.121605 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="641ea204-fc48-4482-bf9e-1d45e8b8e7c7" containerName="oc" Jan 07 10:02:04 crc kubenswrapper[5131]: I0107 10:02:04.127017 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-btkxt" Jan 07 10:02:04 crc kubenswrapper[5131]: I0107 10:02:04.134980 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-btkxt"] Jan 07 10:02:04 crc kubenswrapper[5131]: I0107 10:02:04.221364 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35b7eecf-029e-41b4-a582-a87e9c32fa30-catalog-content\") pod \"redhat-operators-btkxt\" (UID: \"35b7eecf-029e-41b4-a582-a87e9c32fa30\") " pod="openshift-marketplace/redhat-operators-btkxt" Jan 07 10:02:04 crc kubenswrapper[5131]: I0107 10:02:04.221785 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vqrn\" (UniqueName: \"kubernetes.io/projected/35b7eecf-029e-41b4-a582-a87e9c32fa30-kube-api-access-8vqrn\") pod \"redhat-operators-btkxt\" (UID: \"35b7eecf-029e-41b4-a582-a87e9c32fa30\") " pod="openshift-marketplace/redhat-operators-btkxt" Jan 07 10:02:04 crc kubenswrapper[5131]: I0107 10:02:04.222093 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35b7eecf-029e-41b4-a582-a87e9c32fa30-utilities\") pod \"redhat-operators-btkxt\" (UID: \"35b7eecf-029e-41b4-a582-a87e9c32fa30\") " pod="openshift-marketplace/redhat-operators-btkxt" Jan 07 10:02:04 crc kubenswrapper[5131]: I0107 10:02:04.323688 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35b7eecf-029e-41b4-a582-a87e9c32fa30-utilities\") pod \"redhat-operators-btkxt\" (UID: \"35b7eecf-029e-41b4-a582-a87e9c32fa30\") " pod="openshift-marketplace/redhat-operators-btkxt" Jan 07 10:02:04 crc kubenswrapper[5131]: I0107 10:02:04.323952 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35b7eecf-029e-41b4-a582-a87e9c32fa30-catalog-content\") pod \"redhat-operators-btkxt\" (UID: \"35b7eecf-029e-41b4-a582-a87e9c32fa30\") " pod="openshift-marketplace/redhat-operators-btkxt" Jan 07 10:02:04 crc kubenswrapper[5131]: I0107 10:02:04.324113 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8vqrn\" (UniqueName: \"kubernetes.io/projected/35b7eecf-029e-41b4-a582-a87e9c32fa30-kube-api-access-8vqrn\") pod \"redhat-operators-btkxt\" (UID: \"35b7eecf-029e-41b4-a582-a87e9c32fa30\") " pod="openshift-marketplace/redhat-operators-btkxt" Jan 07 10:02:04 crc kubenswrapper[5131]: I0107 10:02:04.324662 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35b7eecf-029e-41b4-a582-a87e9c32fa30-utilities\") pod \"redhat-operators-btkxt\" (UID: \"35b7eecf-029e-41b4-a582-a87e9c32fa30\") " pod="openshift-marketplace/redhat-operators-btkxt" Jan 07 10:02:04 crc kubenswrapper[5131]: I0107 10:02:04.325132 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35b7eecf-029e-41b4-a582-a87e9c32fa30-catalog-content\") pod \"redhat-operators-btkxt\" (UID: \"35b7eecf-029e-41b4-a582-a87e9c32fa30\") " pod="openshift-marketplace/redhat-operators-btkxt" Jan 07 10:02:04 crc kubenswrapper[5131]: I0107 10:02:04.344631 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vqrn\" (UniqueName: \"kubernetes.io/projected/35b7eecf-029e-41b4-a582-a87e9c32fa30-kube-api-access-8vqrn\") pod \"redhat-operators-btkxt\" (UID: \"35b7eecf-029e-41b4-a582-a87e9c32fa30\") " pod="openshift-marketplace/redhat-operators-btkxt" Jan 07 10:02:04 crc kubenswrapper[5131]: I0107 10:02:04.435391 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29463002-5t29p" 
event={"ID":"641ea204-fc48-4482-bf9e-1d45e8b8e7c7","Type":"ContainerDied","Data":"6f9615f1d2136981700a3433f79d3d22ebf23cdcb63f5731059e196ad747df02"} Jan 07 10:02:04 crc kubenswrapper[5131]: I0107 10:02:04.435428 5131 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f9615f1d2136981700a3433f79d3d22ebf23cdcb63f5731059e196ad747df02" Jan 07 10:02:04 crc kubenswrapper[5131]: I0107 10:02:04.435487 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29463002-5t29p" Jan 07 10:02:04 crc kubenswrapper[5131]: I0107 10:02:04.437399 5131 generic.go:358] "Generic (PLEG): container finished" podID="e2193270-1adc-4d1b-b07b-b705d3c0fa2e" containerID="b00646b9525dbc415940494dfe1a995780a1e3fcf82f9d414cfb7606e62ff30d" exitCode=0 Jan 07 10:02:04 crc kubenswrapper[5131]: I0107 10:02:04.437550 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rbdss" event={"ID":"e2193270-1adc-4d1b-b07b-b705d3c0fa2e","Type":"ContainerDied","Data":"b00646b9525dbc415940494dfe1a995780a1e3fcf82f9d414cfb7606e62ff30d"} Jan 07 10:02:04 crc kubenswrapper[5131]: I0107 10:02:04.456620 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-btkxt" Jan 07 10:02:04 crc kubenswrapper[5131]: I0107 10:02:04.667557 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-btkxt"] Jan 07 10:02:05 crc kubenswrapper[5131]: I0107 10:02:05.457910 5131 generic.go:358] "Generic (PLEG): container finished" podID="e2193270-1adc-4d1b-b07b-b705d3c0fa2e" containerID="1d676fa64e04ef621a5cd0a9c7d9c96f194e79adfb635b704d052e6a8529e631" exitCode=0 Jan 07 10:02:05 crc kubenswrapper[5131]: I0107 10:02:05.457994 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rbdss" event={"ID":"e2193270-1adc-4d1b-b07b-b705d3c0fa2e","Type":"ContainerDied","Data":"1d676fa64e04ef621a5cd0a9c7d9c96f194e79adfb635b704d052e6a8529e631"} Jan 07 10:02:05 crc kubenswrapper[5131]: I0107 10:02:05.459364 5131 generic.go:358] "Generic (PLEG): container finished" podID="35b7eecf-029e-41b4-a582-a87e9c32fa30" containerID="354107fcd59774e2a99241571a68d3450317cf1d63e51be0ca436140f6595ed6" exitCode=0 Jan 07 10:02:05 crc kubenswrapper[5131]: I0107 10:02:05.459400 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-btkxt" event={"ID":"35b7eecf-029e-41b4-a582-a87e9c32fa30","Type":"ContainerDied","Data":"354107fcd59774e2a99241571a68d3450317cf1d63e51be0ca436140f6595ed6"} Jan 07 10:02:05 crc kubenswrapper[5131]: I0107 10:02:05.459416 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-btkxt" event={"ID":"35b7eecf-029e-41b4-a582-a87e9c32fa30","Type":"ContainerStarted","Data":"da011e475897e12177053d7a81f18b8038ab2f7cd547de841dc0be09cf4ac4bb"} Jan 07 10:02:06 crc kubenswrapper[5131]: I0107 10:02:06.773183 5131 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rbdss" Jan 07 10:02:06 crc kubenswrapper[5131]: I0107 10:02:06.871253 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e2193270-1adc-4d1b-b07b-b705d3c0fa2e-util\") pod \"e2193270-1adc-4d1b-b07b-b705d3c0fa2e\" (UID: \"e2193270-1adc-4d1b-b07b-b705d3c0fa2e\") " Jan 07 10:02:06 crc kubenswrapper[5131]: I0107 10:02:06.871309 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e2193270-1adc-4d1b-b07b-b705d3c0fa2e-bundle\") pod \"e2193270-1adc-4d1b-b07b-b705d3c0fa2e\" (UID: \"e2193270-1adc-4d1b-b07b-b705d3c0fa2e\") " Jan 07 10:02:06 crc kubenswrapper[5131]: I0107 10:02:06.871431 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dz2sl\" (UniqueName: \"kubernetes.io/projected/e2193270-1adc-4d1b-b07b-b705d3c0fa2e-kube-api-access-dz2sl\") pod \"e2193270-1adc-4d1b-b07b-b705d3c0fa2e\" (UID: \"e2193270-1adc-4d1b-b07b-b705d3c0fa2e\") " Jan 07 10:02:06 crc kubenswrapper[5131]: I0107 10:02:06.874307 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2193270-1adc-4d1b-b07b-b705d3c0fa2e-bundle" (OuterVolumeSpecName: "bundle") pod "e2193270-1adc-4d1b-b07b-b705d3c0fa2e" (UID: "e2193270-1adc-4d1b-b07b-b705d3c0fa2e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:02:06 crc kubenswrapper[5131]: I0107 10:02:06.878003 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2193270-1adc-4d1b-b07b-b705d3c0fa2e-kube-api-access-dz2sl" (OuterVolumeSpecName: "kube-api-access-dz2sl") pod "e2193270-1adc-4d1b-b07b-b705d3c0fa2e" (UID: "e2193270-1adc-4d1b-b07b-b705d3c0fa2e"). InnerVolumeSpecName "kube-api-access-dz2sl". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 10:02:06 crc kubenswrapper[5131]: I0107 10:02:06.902199 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2193270-1adc-4d1b-b07b-b705d3c0fa2e-util" (OuterVolumeSpecName: "util") pod "e2193270-1adc-4d1b-b07b-b705d3c0fa2e" (UID: "e2193270-1adc-4d1b-b07b-b705d3c0fa2e"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:02:06 crc kubenswrapper[5131]: I0107 10:02:06.973169 5131 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e2193270-1adc-4d1b-b07b-b705d3c0fa2e-util\") on node \"crc\" DevicePath \"\"" Jan 07 10:02:06 crc kubenswrapper[5131]: I0107 10:02:06.973499 5131 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e2193270-1adc-4d1b-b07b-b705d3c0fa2e-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 10:02:06 crc kubenswrapper[5131]: I0107 10:02:06.973514 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dz2sl\" (UniqueName: \"kubernetes.io/projected/e2193270-1adc-4d1b-b07b-b705d3c0fa2e-kube-api-access-dz2sl\") on node \"crc\" DevicePath \"\"" Jan 07 10:02:07 crc kubenswrapper[5131]: I0107 10:02:07.477356 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rbdss" event={"ID":"e2193270-1adc-4d1b-b07b-b705d3c0fa2e","Type":"ContainerDied","Data":"68cc67419c09949eeafc2dcf3aa72e6fe71d96ddae5772ec9d8406a5491ce025"} Jan 07 10:02:07 crc kubenswrapper[5131]: I0107 10:02:07.477413 5131 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rbdss" Jan 07 10:02:07 crc kubenswrapper[5131]: I0107 10:02:07.477426 5131 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68cc67419c09949eeafc2dcf3aa72e6fe71d96ddae5772ec9d8406a5491ce025" Jan 07 10:02:07 crc kubenswrapper[5131]: I0107 10:02:07.483062 5131 generic.go:358] "Generic (PLEG): container finished" podID="35b7eecf-029e-41b4-a582-a87e9c32fa30" containerID="157b224980d0fb08f8e36d15ffcf750d12d1cc6daebb53e3093a48ec49197c4f" exitCode=0 Jan 07 10:02:07 crc kubenswrapper[5131]: I0107 10:02:07.483192 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-btkxt" event={"ID":"35b7eecf-029e-41b4-a582-a87e9c32fa30","Type":"ContainerDied","Data":"157b224980d0fb08f8e36d15ffcf750d12d1cc6daebb53e3093a48ec49197c4f"} Jan 07 10:02:08 crc kubenswrapper[5131]: I0107 10:02:08.200025 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fdkscr"] Jan 07 10:02:08 crc kubenswrapper[5131]: I0107 10:02:08.200929 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e2193270-1adc-4d1b-b07b-b705d3c0fa2e" containerName="pull" Jan 07 10:02:08 crc kubenswrapper[5131]: I0107 10:02:08.200956 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2193270-1adc-4d1b-b07b-b705d3c0fa2e" containerName="pull" Jan 07 10:02:08 crc kubenswrapper[5131]: I0107 10:02:08.200984 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e2193270-1adc-4d1b-b07b-b705d3c0fa2e" containerName="util" Jan 07 10:02:08 crc kubenswrapper[5131]: I0107 10:02:08.200998 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2193270-1adc-4d1b-b07b-b705d3c0fa2e" containerName="util" Jan 07 10:02:08 crc kubenswrapper[5131]: I0107 10:02:08.201028 5131 cpu_manager.go:401] 
"RemoveStaleState: containerMap: removing container" podUID="e2193270-1adc-4d1b-b07b-b705d3c0fa2e" containerName="extract" Jan 07 10:02:08 crc kubenswrapper[5131]: I0107 10:02:08.201040 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2193270-1adc-4d1b-b07b-b705d3c0fa2e" containerName="extract" Jan 07 10:02:08 crc kubenswrapper[5131]: I0107 10:02:08.201249 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="e2193270-1adc-4d1b-b07b-b705d3c0fa2e" containerName="extract" Jan 07 10:02:08 crc kubenswrapper[5131]: I0107 10:02:08.226160 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fdkscr"] Jan 07 10:02:08 crc kubenswrapper[5131]: I0107 10:02:08.226414 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fdkscr" Jan 07 10:02:08 crc kubenswrapper[5131]: I0107 10:02:08.234002 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Jan 07 10:02:08 crc kubenswrapper[5131]: I0107 10:02:08.399437 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1a9c62ed-f7ff-4259-bd13-a84f00469f5b-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fdkscr\" (UID: \"1a9c62ed-f7ff-4259-bd13-a84f00469f5b\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fdkscr" Jan 07 10:02:08 crc kubenswrapper[5131]: I0107 10:02:08.399793 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1a9c62ed-f7ff-4259-bd13-a84f00469f5b-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fdkscr\" (UID: \"1a9c62ed-f7ff-4259-bd13-a84f00469f5b\") " 
pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fdkscr" Jan 07 10:02:08 crc kubenswrapper[5131]: I0107 10:02:08.399893 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6snrg\" (UniqueName: \"kubernetes.io/projected/1a9c62ed-f7ff-4259-bd13-a84f00469f5b-kube-api-access-6snrg\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fdkscr\" (UID: \"1a9c62ed-f7ff-4259-bd13-a84f00469f5b\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fdkscr" Jan 07 10:02:08 crc kubenswrapper[5131]: I0107 10:02:08.497820 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-btkxt" event={"ID":"35b7eecf-029e-41b4-a582-a87e9c32fa30","Type":"ContainerStarted","Data":"3a478d8e6feab61f92d9f91cef115b3341d82fc52fd6f47d9de88039e5ed7ba4"} Jan 07 10:02:08 crc kubenswrapper[5131]: I0107 10:02:08.501085 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1a9c62ed-f7ff-4259-bd13-a84f00469f5b-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fdkscr\" (UID: \"1a9c62ed-f7ff-4259-bd13-a84f00469f5b\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fdkscr" Jan 07 10:02:08 crc kubenswrapper[5131]: I0107 10:02:08.501135 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1a9c62ed-f7ff-4259-bd13-a84f00469f5b-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fdkscr\" (UID: \"1a9c62ed-f7ff-4259-bd13-a84f00469f5b\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fdkscr" Jan 07 10:02:08 crc kubenswrapper[5131]: I0107 10:02:08.501193 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6snrg\" 
(UniqueName: \"kubernetes.io/projected/1a9c62ed-f7ff-4259-bd13-a84f00469f5b-kube-api-access-6snrg\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fdkscr\" (UID: \"1a9c62ed-f7ff-4259-bd13-a84f00469f5b\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fdkscr" Jan 07 10:02:08 crc kubenswrapper[5131]: I0107 10:02:08.502067 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1a9c62ed-f7ff-4259-bd13-a84f00469f5b-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fdkscr\" (UID: \"1a9c62ed-f7ff-4259-bd13-a84f00469f5b\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fdkscr" Jan 07 10:02:08 crc kubenswrapper[5131]: I0107 10:02:08.502134 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1a9c62ed-f7ff-4259-bd13-a84f00469f5b-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fdkscr\" (UID: \"1a9c62ed-f7ff-4259-bd13-a84f00469f5b\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fdkscr" Jan 07 10:02:08 crc kubenswrapper[5131]: I0107 10:02:08.527352 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-btkxt" podStartSLOduration=3.520772327 podStartE2EDuration="4.527331896s" podCreationTimestamp="2026-01-07 10:02:04 +0000 UTC" firstStartedPulling="2026-01-07 10:02:05.460591508 +0000 UTC m=+753.626893112" lastFinishedPulling="2026-01-07 10:02:06.467151107 +0000 UTC m=+754.633452681" observedRunningTime="2026-01-07 10:02:08.523425345 +0000 UTC m=+756.689726949" watchObservedRunningTime="2026-01-07 10:02:08.527331896 +0000 UTC m=+756.693633480" Jan 07 10:02:08 crc kubenswrapper[5131]: I0107 10:02:08.531037 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6snrg\" 
(UniqueName: \"kubernetes.io/projected/1a9c62ed-f7ff-4259-bd13-a84f00469f5b-kube-api-access-6snrg\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fdkscr\" (UID: \"1a9c62ed-f7ff-4259-bd13-a84f00469f5b\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fdkscr" Jan 07 10:02:08 crc kubenswrapper[5131]: I0107 10:02:08.555773 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fdkscr" Jan 07 10:02:08 crc kubenswrapper[5131]: I0107 10:02:08.745986 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fdkscr"] Jan 07 10:02:08 crc kubenswrapper[5131]: I0107 10:02:08.946810 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejxcph"] Jan 07 10:02:08 crc kubenswrapper[5131]: I0107 10:02:08.957262 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejxcph" Jan 07 10:02:08 crc kubenswrapper[5131]: I0107 10:02:08.959356 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejxcph"] Jan 07 10:02:09 crc kubenswrapper[5131]: I0107 10:02:09.113137 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8c0506be-0968-43b3-bd5d-2a352b0693bf-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejxcph\" (UID: \"8c0506be-0968-43b3-bd5d-2a352b0693bf\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejxcph" Jan 07 10:02:09 crc kubenswrapper[5131]: I0107 10:02:09.113201 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgdtz\" (UniqueName: \"kubernetes.io/projected/8c0506be-0968-43b3-bd5d-2a352b0693bf-kube-api-access-bgdtz\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejxcph\" (UID: \"8c0506be-0968-43b3-bd5d-2a352b0693bf\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejxcph" Jan 07 10:02:09 crc kubenswrapper[5131]: I0107 10:02:09.113268 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8c0506be-0968-43b3-bd5d-2a352b0693bf-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejxcph\" (UID: \"8c0506be-0968-43b3-bd5d-2a352b0693bf\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejxcph" Jan 07 10:02:09 crc kubenswrapper[5131]: I0107 10:02:09.215175 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/8c0506be-0968-43b3-bd5d-2a352b0693bf-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejxcph\" (UID: \"8c0506be-0968-43b3-bd5d-2a352b0693bf\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejxcph" Jan 07 10:02:09 crc kubenswrapper[5131]: I0107 10:02:09.215248 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8c0506be-0968-43b3-bd5d-2a352b0693bf-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejxcph\" (UID: \"8c0506be-0968-43b3-bd5d-2a352b0693bf\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejxcph" Jan 07 10:02:09 crc kubenswrapper[5131]: I0107 10:02:09.215279 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bgdtz\" (UniqueName: \"kubernetes.io/projected/8c0506be-0968-43b3-bd5d-2a352b0693bf-kube-api-access-bgdtz\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejxcph\" (UID: \"8c0506be-0968-43b3-bd5d-2a352b0693bf\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejxcph" Jan 07 10:02:09 crc kubenswrapper[5131]: I0107 10:02:09.216006 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8c0506be-0968-43b3-bd5d-2a352b0693bf-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejxcph\" (UID: \"8c0506be-0968-43b3-bd5d-2a352b0693bf\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejxcph" Jan 07 10:02:09 crc kubenswrapper[5131]: I0107 10:02:09.216106 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8c0506be-0968-43b3-bd5d-2a352b0693bf-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejxcph\" (UID: 
\"8c0506be-0968-43b3-bd5d-2a352b0693bf\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejxcph" Jan 07 10:02:09 crc kubenswrapper[5131]: I0107 10:02:09.237630 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgdtz\" (UniqueName: \"kubernetes.io/projected/8c0506be-0968-43b3-bd5d-2a352b0693bf-kube-api-access-bgdtz\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejxcph\" (UID: \"8c0506be-0968-43b3-bd5d-2a352b0693bf\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejxcph" Jan 07 10:02:09 crc kubenswrapper[5131]: I0107 10:02:09.332950 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejxcph" Jan 07 10:02:09 crc kubenswrapper[5131]: I0107 10:02:09.516617 5131 generic.go:358] "Generic (PLEG): container finished" podID="1a9c62ed-f7ff-4259-bd13-a84f00469f5b" containerID="63974544923af7bbb2dadaa208f7c4e7c34ce6ff9095dbcaca2d2b78e982a652" exitCode=0 Jan 07 10:02:09 crc kubenswrapper[5131]: I0107 10:02:09.516772 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fdkscr" event={"ID":"1a9c62ed-f7ff-4259-bd13-a84f00469f5b","Type":"ContainerDied","Data":"63974544923af7bbb2dadaa208f7c4e7c34ce6ff9095dbcaca2d2b78e982a652"} Jan 07 10:02:09 crc kubenswrapper[5131]: I0107 10:02:09.516823 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fdkscr" event={"ID":"1a9c62ed-f7ff-4259-bd13-a84f00469f5b","Type":"ContainerStarted","Data":"029ca8dcbdd0bddbf22ef1781ba7f831b651d2e07bf4ffb6959c9195ea6b9260"} Jan 07 10:02:09 crc kubenswrapper[5131]: I0107 10:02:09.632548 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejxcph"] Jan 07 10:02:10 crc kubenswrapper[5131]: I0107 10:02:10.521936 5131 generic.go:358] "Generic (PLEG): container finished" podID="8c0506be-0968-43b3-bd5d-2a352b0693bf" containerID="3ccce2989eea5bcb76d003fbcb478a67ebd6c13c9193b3a91ae79871537bea15" exitCode=0 Jan 07 10:02:10 crc kubenswrapper[5131]: I0107 10:02:10.522113 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejxcph" event={"ID":"8c0506be-0968-43b3-bd5d-2a352b0693bf","Type":"ContainerDied","Data":"3ccce2989eea5bcb76d003fbcb478a67ebd6c13c9193b3a91ae79871537bea15"} Jan 07 10:02:10 crc kubenswrapper[5131]: I0107 10:02:10.522143 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejxcph" event={"ID":"8c0506be-0968-43b3-bd5d-2a352b0693bf","Type":"ContainerStarted","Data":"fe2d5518f16e80a8b47b24ffa924adad75f0da61df3af0ac511d0164cac03285"} Jan 07 10:02:11 crc kubenswrapper[5131]: I0107 10:02:11.529066 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fdkscr" event={"ID":"1a9c62ed-f7ff-4259-bd13-a84f00469f5b","Type":"ContainerStarted","Data":"919ba756b2408fe56a10ba5abf621d902dbf329ca4035632be42d941ded925c7"} Jan 07 10:02:12 crc kubenswrapper[5131]: I0107 10:02:12.544731 5131 generic.go:358] "Generic (PLEG): container finished" podID="1a9c62ed-f7ff-4259-bd13-a84f00469f5b" containerID="919ba756b2408fe56a10ba5abf621d902dbf329ca4035632be42d941ded925c7" exitCode=0 Jan 07 10:02:12 crc kubenswrapper[5131]: I0107 10:02:12.544810 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fdkscr" 
event={"ID":"1a9c62ed-f7ff-4259-bd13-a84f00469f5b","Type":"ContainerDied","Data":"919ba756b2408fe56a10ba5abf621d902dbf329ca4035632be42d941ded925c7"} Jan 07 10:02:13 crc kubenswrapper[5131]: I0107 10:02:13.520615 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-krthv"] Jan 07 10:02:13 crc kubenswrapper[5131]: I0107 10:02:13.527938 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-krthv" Jan 07 10:02:13 crc kubenswrapper[5131]: I0107 10:02:13.539998 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-krthv"] Jan 07 10:02:13 crc kubenswrapper[5131]: I0107 10:02:13.551980 5131 generic.go:358] "Generic (PLEG): container finished" podID="8c0506be-0968-43b3-bd5d-2a352b0693bf" containerID="2819a2e3d1db48c940f7e8516f26c5432c2d04931c67b78e7774748c7830db84" exitCode=0 Jan 07 10:02:13 crc kubenswrapper[5131]: I0107 10:02:13.552082 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejxcph" event={"ID":"8c0506be-0968-43b3-bd5d-2a352b0693bf","Type":"ContainerDied","Data":"2819a2e3d1db48c940f7e8516f26c5432c2d04931c67b78e7774748c7830db84"} Jan 07 10:02:13 crc kubenswrapper[5131]: I0107 10:02:13.554492 5131 generic.go:358] "Generic (PLEG): container finished" podID="1a9c62ed-f7ff-4259-bd13-a84f00469f5b" containerID="ad3159418f137fca9ad2d6961b5583b1d0903ac0f441cbd24c4b4c3cfad9a73e" exitCode=0 Jan 07 10:02:13 crc kubenswrapper[5131]: I0107 10:02:13.554612 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fdkscr" event={"ID":"1a9c62ed-f7ff-4259-bd13-a84f00469f5b","Type":"ContainerDied","Data":"ad3159418f137fca9ad2d6961b5583b1d0903ac0f441cbd24c4b4c3cfad9a73e"} Jan 07 10:02:13 crc kubenswrapper[5131]: I0107 10:02:13.577509 5131 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c927899-39c7-4652-b0da-0af81b334878-utilities\") pod \"certified-operators-krthv\" (UID: \"1c927899-39c7-4652-b0da-0af81b334878\") " pod="openshift-marketplace/certified-operators-krthv" Jan 07 10:02:13 crc kubenswrapper[5131]: I0107 10:02:13.577627 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfnr7\" (UniqueName: \"kubernetes.io/projected/1c927899-39c7-4652-b0da-0af81b334878-kube-api-access-gfnr7\") pod \"certified-operators-krthv\" (UID: \"1c927899-39c7-4652-b0da-0af81b334878\") " pod="openshift-marketplace/certified-operators-krthv" Jan 07 10:02:13 crc kubenswrapper[5131]: I0107 10:02:13.577691 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c927899-39c7-4652-b0da-0af81b334878-catalog-content\") pod \"certified-operators-krthv\" (UID: \"1c927899-39c7-4652-b0da-0af81b334878\") " pod="openshift-marketplace/certified-operators-krthv" Jan 07 10:02:13 crc kubenswrapper[5131]: I0107 10:02:13.678539 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gfnr7\" (UniqueName: \"kubernetes.io/projected/1c927899-39c7-4652-b0da-0af81b334878-kube-api-access-gfnr7\") pod \"certified-operators-krthv\" (UID: \"1c927899-39c7-4652-b0da-0af81b334878\") " pod="openshift-marketplace/certified-operators-krthv" Jan 07 10:02:13 crc kubenswrapper[5131]: I0107 10:02:13.679090 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c927899-39c7-4652-b0da-0af81b334878-catalog-content\") pod \"certified-operators-krthv\" (UID: \"1c927899-39c7-4652-b0da-0af81b334878\") " pod="openshift-marketplace/certified-operators-krthv" Jan 07 
10:02:13 crc kubenswrapper[5131]: I0107 10:02:13.679198 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c927899-39c7-4652-b0da-0af81b334878-utilities\") pod \"certified-operators-krthv\" (UID: \"1c927899-39c7-4652-b0da-0af81b334878\") " pod="openshift-marketplace/certified-operators-krthv" Jan 07 10:02:13 crc kubenswrapper[5131]: I0107 10:02:13.679603 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c927899-39c7-4652-b0da-0af81b334878-utilities\") pod \"certified-operators-krthv\" (UID: \"1c927899-39c7-4652-b0da-0af81b334878\") " pod="openshift-marketplace/certified-operators-krthv" Jan 07 10:02:13 crc kubenswrapper[5131]: I0107 10:02:13.679886 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c927899-39c7-4652-b0da-0af81b334878-catalog-content\") pod \"certified-operators-krthv\" (UID: \"1c927899-39c7-4652-b0da-0af81b334878\") " pod="openshift-marketplace/certified-operators-krthv" Jan 07 10:02:13 crc kubenswrapper[5131]: I0107 10:02:13.701366 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfnr7\" (UniqueName: \"kubernetes.io/projected/1c927899-39c7-4652-b0da-0af81b334878-kube-api-access-gfnr7\") pod \"certified-operators-krthv\" (UID: \"1c927899-39c7-4652-b0da-0af81b334878\") " pod="openshift-marketplace/certified-operators-krthv" Jan 07 10:02:13 crc kubenswrapper[5131]: I0107 10:02:13.841175 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-krthv" Jan 07 10:02:14 crc kubenswrapper[5131]: I0107 10:02:14.394287 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-krthv"] Jan 07 10:02:14 crc kubenswrapper[5131]: W0107 10:02:14.425926 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c927899_39c7_4652_b0da_0af81b334878.slice/crio-05e00bea4dfa122412ac4fb8626ebe6a48dba900a575460bec4d9215bf28eb24 WatchSource:0}: Error finding container 05e00bea4dfa122412ac4fb8626ebe6a48dba900a575460bec4d9215bf28eb24: Status 404 returned error can't find the container with id 05e00bea4dfa122412ac4fb8626ebe6a48dba900a575460bec4d9215bf28eb24 Jan 07 10:02:14 crc kubenswrapper[5131]: I0107 10:02:14.457185 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-btkxt" Jan 07 10:02:14 crc kubenswrapper[5131]: I0107 10:02:14.457245 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-btkxt" Jan 07 10:02:14 crc kubenswrapper[5131]: I0107 10:02:14.564254 5131 generic.go:358] "Generic (PLEG): container finished" podID="8c0506be-0968-43b3-bd5d-2a352b0693bf" containerID="7628673e8a5c9b0983173d2253dbbd581edfd89b2f3aaadcadb68fb617b693b9" exitCode=0 Jan 07 10:02:14 crc kubenswrapper[5131]: I0107 10:02:14.564312 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejxcph" event={"ID":"8c0506be-0968-43b3-bd5d-2a352b0693bf","Type":"ContainerDied","Data":"7628673e8a5c9b0983173d2253dbbd581edfd89b2f3aaadcadb68fb617b693b9"} Jan 07 10:02:14 crc kubenswrapper[5131]: I0107 10:02:14.566931 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-krthv" 
event={"ID":"1c927899-39c7-4652-b0da-0af81b334878","Type":"ContainerStarted","Data":"05e00bea4dfa122412ac4fb8626ebe6a48dba900a575460bec4d9215bf28eb24"} Jan 07 10:02:14 crc kubenswrapper[5131]: I0107 10:02:14.937889 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fdkscr" Jan 07 10:02:14 crc kubenswrapper[5131]: I0107 10:02:14.995882 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6snrg\" (UniqueName: \"kubernetes.io/projected/1a9c62ed-f7ff-4259-bd13-a84f00469f5b-kube-api-access-6snrg\") pod \"1a9c62ed-f7ff-4259-bd13-a84f00469f5b\" (UID: \"1a9c62ed-f7ff-4259-bd13-a84f00469f5b\") " Jan 07 10:02:14 crc kubenswrapper[5131]: I0107 10:02:14.996067 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1a9c62ed-f7ff-4259-bd13-a84f00469f5b-bundle\") pod \"1a9c62ed-f7ff-4259-bd13-a84f00469f5b\" (UID: \"1a9c62ed-f7ff-4259-bd13-a84f00469f5b\") " Jan 07 10:02:14 crc kubenswrapper[5131]: I0107 10:02:14.996089 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1a9c62ed-f7ff-4259-bd13-a84f00469f5b-util\") pod \"1a9c62ed-f7ff-4259-bd13-a84f00469f5b\" (UID: \"1a9c62ed-f7ff-4259-bd13-a84f00469f5b\") " Jan 07 10:02:14 crc kubenswrapper[5131]: I0107 10:02:14.996977 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a9c62ed-f7ff-4259-bd13-a84f00469f5b-bundle" (OuterVolumeSpecName: "bundle") pod "1a9c62ed-f7ff-4259-bd13-a84f00469f5b" (UID: "1a9c62ed-f7ff-4259-bd13-a84f00469f5b"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:02:15 crc kubenswrapper[5131]: I0107 10:02:15.001301 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a9c62ed-f7ff-4259-bd13-a84f00469f5b-kube-api-access-6snrg" (OuterVolumeSpecName: "kube-api-access-6snrg") pod "1a9c62ed-f7ff-4259-bd13-a84f00469f5b" (UID: "1a9c62ed-f7ff-4259-bd13-a84f00469f5b"). InnerVolumeSpecName "kube-api-access-6snrg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 10:02:15 crc kubenswrapper[5131]: I0107 10:02:15.004554 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a9c62ed-f7ff-4259-bd13-a84f00469f5b-util" (OuterVolumeSpecName: "util") pod "1a9c62ed-f7ff-4259-bd13-a84f00469f5b" (UID: "1a9c62ed-f7ff-4259-bd13-a84f00469f5b"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:02:15 crc kubenswrapper[5131]: I0107 10:02:15.097250 5131 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1a9c62ed-f7ff-4259-bd13-a84f00469f5b-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 10:02:15 crc kubenswrapper[5131]: I0107 10:02:15.097284 5131 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1a9c62ed-f7ff-4259-bd13-a84f00469f5b-util\") on node \"crc\" DevicePath \"\"" Jan 07 10:02:15 crc kubenswrapper[5131]: I0107 10:02:15.097295 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6snrg\" (UniqueName: \"kubernetes.io/projected/1a9c62ed-f7ff-4259-bd13-a84f00469f5b-kube-api-access-6snrg\") on node \"crc\" DevicePath \"\"" Jan 07 10:02:15 crc kubenswrapper[5131]: I0107 10:02:15.518289 5131 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-btkxt" podUID="35b7eecf-029e-41b4-a582-a87e9c32fa30" containerName="registry-server" probeResult="failure" output=< Jan 07 
10:02:15 crc kubenswrapper[5131]: timeout: failed to connect service ":50051" within 1s Jan 07 10:02:15 crc kubenswrapper[5131]: > Jan 07 10:02:15 crc kubenswrapper[5131]: I0107 10:02:15.572828 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fdkscr" event={"ID":"1a9c62ed-f7ff-4259-bd13-a84f00469f5b","Type":"ContainerDied","Data":"029ca8dcbdd0bddbf22ef1781ba7f831b651d2e07bf4ffb6959c9195ea6b9260"} Jan 07 10:02:15 crc kubenswrapper[5131]: I0107 10:02:15.572874 5131 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="029ca8dcbdd0bddbf22ef1781ba7f831b651d2e07bf4ffb6959c9195ea6b9260" Jan 07 10:02:15 crc kubenswrapper[5131]: I0107 10:02:15.572957 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fdkscr" Jan 07 10:02:15 crc kubenswrapper[5131]: I0107 10:02:15.575132 5131 generic.go:358] "Generic (PLEG): container finished" podID="1c927899-39c7-4652-b0da-0af81b334878" containerID="4e038eb66c67fc4039e5e0dbdb1c8a08f925ea366ba49a343c688bf94a45e7f6" exitCode=0 Jan 07 10:02:15 crc kubenswrapper[5131]: I0107 10:02:15.575208 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-krthv" event={"ID":"1c927899-39c7-4652-b0da-0af81b334878","Type":"ContainerDied","Data":"4e038eb66c67fc4039e5e0dbdb1c8a08f925ea366ba49a343c688bf94a45e7f6"} Jan 07 10:02:15 crc kubenswrapper[5131]: I0107 10:02:15.805436 5131 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejxcph" Jan 07 10:02:15 crc kubenswrapper[5131]: I0107 10:02:15.905710 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8c0506be-0968-43b3-bd5d-2a352b0693bf-bundle\") pod \"8c0506be-0968-43b3-bd5d-2a352b0693bf\" (UID: \"8c0506be-0968-43b3-bd5d-2a352b0693bf\") " Jan 07 10:02:15 crc kubenswrapper[5131]: I0107 10:02:15.905820 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgdtz\" (UniqueName: \"kubernetes.io/projected/8c0506be-0968-43b3-bd5d-2a352b0693bf-kube-api-access-bgdtz\") pod \"8c0506be-0968-43b3-bd5d-2a352b0693bf\" (UID: \"8c0506be-0968-43b3-bd5d-2a352b0693bf\") " Jan 07 10:02:15 crc kubenswrapper[5131]: I0107 10:02:15.905899 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8c0506be-0968-43b3-bd5d-2a352b0693bf-util\") pod \"8c0506be-0968-43b3-bd5d-2a352b0693bf\" (UID: \"8c0506be-0968-43b3-bd5d-2a352b0693bf\") " Jan 07 10:02:15 crc kubenswrapper[5131]: I0107 10:02:15.908359 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c0506be-0968-43b3-bd5d-2a352b0693bf-bundle" (OuterVolumeSpecName: "bundle") pod "8c0506be-0968-43b3-bd5d-2a352b0693bf" (UID: "8c0506be-0968-43b3-bd5d-2a352b0693bf"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:02:15 crc kubenswrapper[5131]: I0107 10:02:15.913458 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c0506be-0968-43b3-bd5d-2a352b0693bf-util" (OuterVolumeSpecName: "util") pod "8c0506be-0968-43b3-bd5d-2a352b0693bf" (UID: "8c0506be-0968-43b3-bd5d-2a352b0693bf"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:02:15 crc kubenswrapper[5131]: I0107 10:02:15.914520 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c0506be-0968-43b3-bd5d-2a352b0693bf-kube-api-access-bgdtz" (OuterVolumeSpecName: "kube-api-access-bgdtz") pod "8c0506be-0968-43b3-bd5d-2a352b0693bf" (UID: "8c0506be-0968-43b3-bd5d-2a352b0693bf"). InnerVolumeSpecName "kube-api-access-bgdtz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 10:02:16 crc kubenswrapper[5131]: I0107 10:02:16.006939 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bgdtz\" (UniqueName: \"kubernetes.io/projected/8c0506be-0968-43b3-bd5d-2a352b0693bf-kube-api-access-bgdtz\") on node \"crc\" DevicePath \"\"" Jan 07 10:02:16 crc kubenswrapper[5131]: I0107 10:02:16.006966 5131 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8c0506be-0968-43b3-bd5d-2a352b0693bf-util\") on node \"crc\" DevicePath \"\"" Jan 07 10:02:16 crc kubenswrapper[5131]: I0107 10:02:16.006974 5131 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8c0506be-0968-43b3-bd5d-2a352b0693bf-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 10:02:16 crc kubenswrapper[5131]: I0107 10:02:16.584236 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejxcph" event={"ID":"8c0506be-0968-43b3-bd5d-2a352b0693bf","Type":"ContainerDied","Data":"fe2d5518f16e80a8b47b24ffa924adad75f0da61df3af0ac511d0164cac03285"} Jan 07 10:02:16 crc kubenswrapper[5131]: I0107 10:02:16.584532 5131 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe2d5518f16e80a8b47b24ffa924adad75f0da61df3af0ac511d0164cac03285" Jan 07 10:02:16 crc kubenswrapper[5131]: I0107 10:02:16.584304 5131 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejxcph" Jan 07 10:02:17 crc kubenswrapper[5131]: I0107 10:02:17.013761 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77d6g"] Jan 07 10:02:17 crc kubenswrapper[5131]: I0107 10:02:17.014409 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8c0506be-0968-43b3-bd5d-2a352b0693bf" containerName="util" Jan 07 10:02:17 crc kubenswrapper[5131]: I0107 10:02:17.014426 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c0506be-0968-43b3-bd5d-2a352b0693bf" containerName="util" Jan 07 10:02:17 crc kubenswrapper[5131]: I0107 10:02:17.014440 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1a9c62ed-f7ff-4259-bd13-a84f00469f5b" containerName="util" Jan 07 10:02:17 crc kubenswrapper[5131]: I0107 10:02:17.014632 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a9c62ed-f7ff-4259-bd13-a84f00469f5b" containerName="util" Jan 07 10:02:17 crc kubenswrapper[5131]: I0107 10:02:17.014641 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8c0506be-0968-43b3-bd5d-2a352b0693bf" containerName="pull" Jan 07 10:02:17 crc kubenswrapper[5131]: I0107 10:02:17.014647 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c0506be-0968-43b3-bd5d-2a352b0693bf" containerName="pull" Jan 07 10:02:17 crc kubenswrapper[5131]: I0107 10:02:17.014659 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1a9c62ed-f7ff-4259-bd13-a84f00469f5b" containerName="pull" Jan 07 10:02:17 crc kubenswrapper[5131]: I0107 10:02:17.014664 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a9c62ed-f7ff-4259-bd13-a84f00469f5b" containerName="pull" Jan 07 10:02:17 crc kubenswrapper[5131]: I0107 10:02:17.014680 5131 cpu_manager.go:401] "RemoveStaleState: 
containerMap: removing container" podUID="1a9c62ed-f7ff-4259-bd13-a84f00469f5b" containerName="extract" Jan 07 10:02:17 crc kubenswrapper[5131]: I0107 10:02:17.014685 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a9c62ed-f7ff-4259-bd13-a84f00469f5b" containerName="extract" Jan 07 10:02:17 crc kubenswrapper[5131]: I0107 10:02:17.014694 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8c0506be-0968-43b3-bd5d-2a352b0693bf" containerName="extract" Jan 07 10:02:17 crc kubenswrapper[5131]: I0107 10:02:17.014699 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c0506be-0968-43b3-bd5d-2a352b0693bf" containerName="extract" Jan 07 10:02:17 crc kubenswrapper[5131]: I0107 10:02:17.014790 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="1a9c62ed-f7ff-4259-bd13-a84f00469f5b" containerName="extract" Jan 07 10:02:17 crc kubenswrapper[5131]: I0107 10:02:17.014798 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="8c0506be-0968-43b3-bd5d-2a352b0693bf" containerName="extract" Jan 07 10:02:17 crc kubenswrapper[5131]: I0107 10:02:17.147690 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77d6g"] Jan 07 10:02:17 crc kubenswrapper[5131]: I0107 10:02:17.147893 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77d6g" Jan 07 10:02:17 crc kubenswrapper[5131]: I0107 10:02:17.151272 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Jan 07 10:02:17 crc kubenswrapper[5131]: I0107 10:02:17.223103 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wld9\" (UniqueName: \"kubernetes.io/projected/b70cf65d-ed30-49ae-b590-19a7e38dfae7-kube-api-access-6wld9\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77d6g\" (UID: \"b70cf65d-ed30-49ae-b590-19a7e38dfae7\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77d6g" Jan 07 10:02:17 crc kubenswrapper[5131]: I0107 10:02:17.223145 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b70cf65d-ed30-49ae-b590-19a7e38dfae7-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77d6g\" (UID: \"b70cf65d-ed30-49ae-b590-19a7e38dfae7\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77d6g" Jan 07 10:02:17 crc kubenswrapper[5131]: I0107 10:02:17.223286 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b70cf65d-ed30-49ae-b590-19a7e38dfae7-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77d6g\" (UID: \"b70cf65d-ed30-49ae-b590-19a7e38dfae7\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77d6g" Jan 07 10:02:17 crc kubenswrapper[5131]: I0107 10:02:17.324559 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6wld9\" (UniqueName: 
\"kubernetes.io/projected/b70cf65d-ed30-49ae-b590-19a7e38dfae7-kube-api-access-6wld9\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77d6g\" (UID: \"b70cf65d-ed30-49ae-b590-19a7e38dfae7\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77d6g" Jan 07 10:02:17 crc kubenswrapper[5131]: I0107 10:02:17.324890 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b70cf65d-ed30-49ae-b590-19a7e38dfae7-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77d6g\" (UID: \"b70cf65d-ed30-49ae-b590-19a7e38dfae7\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77d6g" Jan 07 10:02:17 crc kubenswrapper[5131]: I0107 10:02:17.324925 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b70cf65d-ed30-49ae-b590-19a7e38dfae7-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77d6g\" (UID: \"b70cf65d-ed30-49ae-b590-19a7e38dfae7\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77d6g" Jan 07 10:02:17 crc kubenswrapper[5131]: I0107 10:02:17.325372 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b70cf65d-ed30-49ae-b590-19a7e38dfae7-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77d6g\" (UID: \"b70cf65d-ed30-49ae-b590-19a7e38dfae7\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77d6g" Jan 07 10:02:17 crc kubenswrapper[5131]: I0107 10:02:17.325494 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b70cf65d-ed30-49ae-b590-19a7e38dfae7-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77d6g\" (UID: 
\"b70cf65d-ed30-49ae-b590-19a7e38dfae7\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77d6g" Jan 07 10:02:17 crc kubenswrapper[5131]: I0107 10:02:17.346735 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wld9\" (UniqueName: \"kubernetes.io/projected/b70cf65d-ed30-49ae-b590-19a7e38dfae7-kube-api-access-6wld9\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77d6g\" (UID: \"b70cf65d-ed30-49ae-b590-19a7e38dfae7\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77d6g" Jan 07 10:02:17 crc kubenswrapper[5131]: I0107 10:02:17.459846 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77d6g" Jan 07 10:02:17 crc kubenswrapper[5131]: I0107 10:02:17.570063 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-7jtf6"] Jan 07 10:02:17 crc kubenswrapper[5131]: I0107 10:02:17.606043 5131 generic.go:358] "Generic (PLEG): container finished" podID="1c927899-39c7-4652-b0da-0af81b334878" containerID="1b002f582b6aed5093cc1933383827219f4f23b9fa5fe8e0013ab29756805ef6" exitCode=0 Jan 07 10:02:17 crc kubenswrapper[5131]: I0107 10:02:17.670176 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-7jtf6"] Jan 07 10:02:17 crc kubenswrapper[5131]: I0107 10:02:17.670263 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-krthv" event={"ID":"1c927899-39c7-4652-b0da-0af81b334878","Type":"ContainerDied","Data":"1b002f582b6aed5093cc1933383827219f4f23b9fa5fe8e0013ab29756805ef6"} Jan 07 10:02:17 crc kubenswrapper[5131]: I0107 10:02:17.670503 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-7jtf6" Jan 07 10:02:17 crc kubenswrapper[5131]: I0107 10:02:17.673396 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"kube-root-ca.crt\"" Jan 07 10:02:17 crc kubenswrapper[5131]: I0107 10:02:17.674099 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-dockercfg-spq94\"" Jan 07 10:02:17 crc kubenswrapper[5131]: I0107 10:02:17.674240 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"openshift-service-ca.crt\"" Jan 07 10:02:17 crc kubenswrapper[5131]: I0107 10:02:17.687038 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-795b46cc9d-w9fz2"] Jan 07 10:02:17 crc kubenswrapper[5131]: I0107 10:02:17.737518 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmdl6\" (UniqueName: \"kubernetes.io/projected/c8e50a15-61cb-4e8a-aa55-f77f526b5a0d-kube-api-access-lmdl6\") pod \"obo-prometheus-operator-9bc85b4bf-7jtf6\" (UID: \"c8e50a15-61cb-4e8a-aa55-f77f526b5a0d\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-7jtf6" Jan 07 10:02:17 crc kubenswrapper[5131]: I0107 10:02:17.761526 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77d6g"] Jan 07 10:02:17 crc kubenswrapper[5131]: I0107 10:02:17.761573 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-795b46cc9d-w9fz2"] Jan 07 10:02:17 crc kubenswrapper[5131]: I0107 10:02:17.761644 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-795b46cc9d-w9fz2" Jan 07 10:02:17 crc kubenswrapper[5131]: I0107 10:02:17.765229 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-795b46cc9d-lxzpk"] Jan 07 10:02:17 crc kubenswrapper[5131]: I0107 10:02:17.767399 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-dockercfg-6tsqj\"" Jan 07 10:02:17 crc kubenswrapper[5131]: I0107 10:02:17.767452 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\"" Jan 07 10:02:17 crc kubenswrapper[5131]: I0107 10:02:17.773171 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-795b46cc9d-lxzpk"] Jan 07 10:02:17 crc kubenswrapper[5131]: I0107 10:02:17.773294 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-795b46cc9d-lxzpk" Jan 07 10:02:17 crc kubenswrapper[5131]: I0107 10:02:17.844853 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5d36a675-e1c1-4c4e-9713-b9a91a58a13c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-795b46cc9d-w9fz2\" (UID: \"5d36a675-e1c1-4c4e-9713-b9a91a58a13c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-795b46cc9d-w9fz2" Jan 07 10:02:17 crc kubenswrapper[5131]: I0107 10:02:17.844921 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/83eb167b-dda6-4f17-b5be-fff07421691b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-795b46cc9d-lxzpk\" (UID: \"83eb167b-dda6-4f17-b5be-fff07421691b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-795b46cc9d-lxzpk" Jan 07 10:02:17 crc kubenswrapper[5131]: I0107 10:02:17.844947 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lmdl6\" (UniqueName: \"kubernetes.io/projected/c8e50a15-61cb-4e8a-aa55-f77f526b5a0d-kube-api-access-lmdl6\") pod \"obo-prometheus-operator-9bc85b4bf-7jtf6\" (UID: \"c8e50a15-61cb-4e8a-aa55-f77f526b5a0d\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-7jtf6" Jan 07 10:02:17 crc kubenswrapper[5131]: I0107 10:02:17.844969 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5d36a675-e1c1-4c4e-9713-b9a91a58a13c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-795b46cc9d-w9fz2\" (UID: \"5d36a675-e1c1-4c4e-9713-b9a91a58a13c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-795b46cc9d-w9fz2" Jan 07 10:02:17 crc 
kubenswrapper[5131]: I0107 10:02:17.845002 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/83eb167b-dda6-4f17-b5be-fff07421691b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-795b46cc9d-lxzpk\" (UID: \"83eb167b-dda6-4f17-b5be-fff07421691b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-795b46cc9d-lxzpk" Jan 07 10:02:17 crc kubenswrapper[5131]: I0107 10:02:17.878615 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmdl6\" (UniqueName: \"kubernetes.io/projected/c8e50a15-61cb-4e8a-aa55-f77f526b5a0d-kube-api-access-lmdl6\") pod \"obo-prometheus-operator-9bc85b4bf-7jtf6\" (UID: \"c8e50a15-61cb-4e8a-aa55-f77f526b5a0d\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-7jtf6" Jan 07 10:02:17 crc kubenswrapper[5131]: I0107 10:02:17.894404 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-85c68dddb-mdzv4"] Jan 07 10:02:17 crc kubenswrapper[5131]: I0107 10:02:17.945989 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/83eb167b-dda6-4f17-b5be-fff07421691b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-795b46cc9d-lxzpk\" (UID: \"83eb167b-dda6-4f17-b5be-fff07421691b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-795b46cc9d-lxzpk" Jan 07 10:02:17 crc kubenswrapper[5131]: I0107 10:02:17.946956 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5d36a675-e1c1-4c4e-9713-b9a91a58a13c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-795b46cc9d-w9fz2\" (UID: \"5d36a675-e1c1-4c4e-9713-b9a91a58a13c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-795b46cc9d-w9fz2" Jan 07 10:02:17 crc 
kubenswrapper[5131]: I0107 10:02:17.947112 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/83eb167b-dda6-4f17-b5be-fff07421691b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-795b46cc9d-lxzpk\" (UID: \"83eb167b-dda6-4f17-b5be-fff07421691b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-795b46cc9d-lxzpk" Jan 07 10:02:17 crc kubenswrapper[5131]: I0107 10:02:17.947325 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5d36a675-e1c1-4c4e-9713-b9a91a58a13c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-795b46cc9d-w9fz2\" (UID: \"5d36a675-e1c1-4c4e-9713-b9a91a58a13c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-795b46cc9d-w9fz2" Jan 07 10:02:17 crc kubenswrapper[5131]: I0107 10:02:17.950473 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/83eb167b-dda6-4f17-b5be-fff07421691b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-795b46cc9d-lxzpk\" (UID: \"83eb167b-dda6-4f17-b5be-fff07421691b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-795b46cc9d-lxzpk" Jan 07 10:02:17 crc kubenswrapper[5131]: I0107 10:02:17.950514 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5d36a675-e1c1-4c4e-9713-b9a91a58a13c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-795b46cc9d-w9fz2\" (UID: \"5d36a675-e1c1-4c4e-9713-b9a91a58a13c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-795b46cc9d-w9fz2" Jan 07 10:02:17 crc kubenswrapper[5131]: I0107 10:02:17.950514 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/5d36a675-e1c1-4c4e-9713-b9a91a58a13c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-795b46cc9d-w9fz2\" (UID: \"5d36a675-e1c1-4c4e-9713-b9a91a58a13c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-795b46cc9d-w9fz2" Jan 07 10:02:17 crc kubenswrapper[5131]: I0107 10:02:17.950485 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/83eb167b-dda6-4f17-b5be-fff07421691b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-795b46cc9d-lxzpk\" (UID: \"83eb167b-dda6-4f17-b5be-fff07421691b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-795b46cc9d-lxzpk" Jan 07 10:02:17 crc kubenswrapper[5131]: I0107 10:02:17.996898 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-7jtf6" Jan 07 10:02:18 crc kubenswrapper[5131]: I0107 10:02:18.159032 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-795b46cc9d-w9fz2" Jan 07 10:02:18 crc kubenswrapper[5131]: I0107 10:02:18.166985 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-795b46cc9d-lxzpk" Jan 07 10:02:18 crc kubenswrapper[5131]: W0107 10:02:18.470462 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8e50a15_61cb_4e8a_aa55_f77f526b5a0d.slice/crio-c48ece6c390457c7097fc12268c001adf94d912e6692326a66493c786fbf0ef5 WatchSource:0}: Error finding container c48ece6c390457c7097fc12268c001adf94d912e6692326a66493c786fbf0ef5: Status 404 returned error can't find the container with id c48ece6c390457c7097fc12268c001adf94d912e6692326a66493c786fbf0ef5 Jan 07 10:02:18 crc kubenswrapper[5131]: W0107 10:02:18.637880 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83eb167b_dda6_4f17_b5be_fff07421691b.slice/crio-4cac2429df3b7cbaa61d57e4b452f60915251eb08f69129d2d3456809e75a20e WatchSource:0}: Error finding container 4cac2429df3b7cbaa61d57e4b452f60915251eb08f69129d2d3456809e75a20e: Status 404 returned error can't find the container with id 4cac2429df3b7cbaa61d57e4b452f60915251eb08f69129d2d3456809e75a20e Jan 07 10:02:18 crc kubenswrapper[5131]: I0107 10:02:18.673387 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-85c68dddb-mdzv4" Jan 07 10:02:18 crc kubenswrapper[5131]: I0107 10:02:18.676528 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-sa-dockercfg-2925j\"" Jan 07 10:02:18 crc kubenswrapper[5131]: I0107 10:02:18.676732 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-tls\"" Jan 07 10:02:18 crc kubenswrapper[5131]: I0107 10:02:18.689362 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-7jtf6" event={"ID":"c8e50a15-61cb-4e8a-aa55-f77f526b5a0d","Type":"ContainerStarted","Data":"c48ece6c390457c7097fc12268c001adf94d912e6692326a66493c786fbf0ef5"} Jan 07 10:02:18 crc kubenswrapper[5131]: I0107 10:02:18.689401 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-85c68dddb-mdzv4"] Jan 07 10:02:18 crc kubenswrapper[5131]: I0107 10:02:18.689416 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-prwfx"] Jan 07 10:02:18 crc kubenswrapper[5131]: I0107 10:02:18.693382 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-prwfx"] Jan 07 10:02:18 crc kubenswrapper[5131]: I0107 10:02:18.693414 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-795b46cc9d-w9fz2"] Jan 07 10:02:18 crc kubenswrapper[5131]: I0107 10:02:18.693430 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77d6g" event={"ID":"b70cf65d-ed30-49ae-b590-19a7e38dfae7","Type":"ContainerStarted","Data":"22882eb323b4cf8858add3fb763fe912b9a07be935beca764a779c9ef78e0b66"} Jan 07 10:02:18 crc kubenswrapper[5131]: I0107 
10:02:18.693451 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-7jtf6"] Jan 07 10:02:18 crc kubenswrapper[5131]: I0107 10:02:18.693464 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-795b46cc9d-w9fz2" event={"ID":"5d36a675-e1c1-4c4e-9713-b9a91a58a13c","Type":"ContainerStarted","Data":"a52987749d02d53873536f9a8fbf6751b7b19bead1cb27c560cbec5c519110ad"} Jan 07 10:02:18 crc kubenswrapper[5131]: I0107 10:02:18.693476 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-795b46cc9d-lxzpk"] Jan 07 10:02:18 crc kubenswrapper[5131]: I0107 10:02:18.693517 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-669c9f96b5-prwfx" Jan 07 10:02:18 crc kubenswrapper[5131]: I0107 10:02:18.695778 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"perses-operator-dockercfg-w4bmq\"" Jan 07 10:02:18 crc kubenswrapper[5131]: I0107 10:02:18.757156 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/991d7296-8f72-4c27-9a9c-de2becfb27dd-observability-operator-tls\") pod \"observability-operator-85c68dddb-mdzv4\" (UID: \"991d7296-8f72-4c27-9a9c-de2becfb27dd\") " pod="openshift-operators/observability-operator-85c68dddb-mdzv4" Jan 07 10:02:18 crc kubenswrapper[5131]: I0107 10:02:18.757225 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlvdj\" (UniqueName: \"kubernetes.io/projected/4f2a238b-1c94-4943-8e48-8f6d69d3d975-kube-api-access-xlvdj\") pod \"perses-operator-669c9f96b5-prwfx\" (UID: \"4f2a238b-1c94-4943-8e48-8f6d69d3d975\") " pod="openshift-operators/perses-operator-669c9f96b5-prwfx" Jan 
07 10:02:18 crc kubenswrapper[5131]: I0107 10:02:18.757251 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/4f2a238b-1c94-4943-8e48-8f6d69d3d975-openshift-service-ca\") pod \"perses-operator-669c9f96b5-prwfx\" (UID: \"4f2a238b-1c94-4943-8e48-8f6d69d3d975\") " pod="openshift-operators/perses-operator-669c9f96b5-prwfx" Jan 07 10:02:18 crc kubenswrapper[5131]: I0107 10:02:18.757364 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjtrl\" (UniqueName: \"kubernetes.io/projected/991d7296-8f72-4c27-9a9c-de2becfb27dd-kube-api-access-pjtrl\") pod \"observability-operator-85c68dddb-mdzv4\" (UID: \"991d7296-8f72-4c27-9a9c-de2becfb27dd\") " pod="openshift-operators/observability-operator-85c68dddb-mdzv4" Jan 07 10:02:18 crc kubenswrapper[5131]: I0107 10:02:18.858717 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pjtrl\" (UniqueName: \"kubernetes.io/projected/991d7296-8f72-4c27-9a9c-de2becfb27dd-kube-api-access-pjtrl\") pod \"observability-operator-85c68dddb-mdzv4\" (UID: \"991d7296-8f72-4c27-9a9c-de2becfb27dd\") " pod="openshift-operators/observability-operator-85c68dddb-mdzv4" Jan 07 10:02:18 crc kubenswrapper[5131]: I0107 10:02:18.859711 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/991d7296-8f72-4c27-9a9c-de2becfb27dd-observability-operator-tls\") pod \"observability-operator-85c68dddb-mdzv4\" (UID: \"991d7296-8f72-4c27-9a9c-de2becfb27dd\") " pod="openshift-operators/observability-operator-85c68dddb-mdzv4" Jan 07 10:02:18 crc kubenswrapper[5131]: I0107 10:02:18.859770 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xlvdj\" (UniqueName: 
\"kubernetes.io/projected/4f2a238b-1c94-4943-8e48-8f6d69d3d975-kube-api-access-xlvdj\") pod \"perses-operator-669c9f96b5-prwfx\" (UID: \"4f2a238b-1c94-4943-8e48-8f6d69d3d975\") " pod="openshift-operators/perses-operator-669c9f96b5-prwfx" Jan 07 10:02:18 crc kubenswrapper[5131]: I0107 10:02:18.859863 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/4f2a238b-1c94-4943-8e48-8f6d69d3d975-openshift-service-ca\") pod \"perses-operator-669c9f96b5-prwfx\" (UID: \"4f2a238b-1c94-4943-8e48-8f6d69d3d975\") " pod="openshift-operators/perses-operator-669c9f96b5-prwfx" Jan 07 10:02:18 crc kubenswrapper[5131]: I0107 10:02:18.860998 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/4f2a238b-1c94-4943-8e48-8f6d69d3d975-openshift-service-ca\") pod \"perses-operator-669c9f96b5-prwfx\" (UID: \"4f2a238b-1c94-4943-8e48-8f6d69d3d975\") " pod="openshift-operators/perses-operator-669c9f96b5-prwfx" Jan 07 10:02:18 crc kubenswrapper[5131]: I0107 10:02:18.868153 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/991d7296-8f72-4c27-9a9c-de2becfb27dd-observability-operator-tls\") pod \"observability-operator-85c68dddb-mdzv4\" (UID: \"991d7296-8f72-4c27-9a9c-de2becfb27dd\") " pod="openshift-operators/observability-operator-85c68dddb-mdzv4" Jan 07 10:02:18 crc kubenswrapper[5131]: I0107 10:02:18.880464 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjtrl\" (UniqueName: \"kubernetes.io/projected/991d7296-8f72-4c27-9a9c-de2becfb27dd-kube-api-access-pjtrl\") pod \"observability-operator-85c68dddb-mdzv4\" (UID: \"991d7296-8f72-4c27-9a9c-de2becfb27dd\") " pod="openshift-operators/observability-operator-85c68dddb-mdzv4" Jan 07 10:02:18 crc kubenswrapper[5131]: I0107 10:02:18.885749 5131 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlvdj\" (UniqueName: \"kubernetes.io/projected/4f2a238b-1c94-4943-8e48-8f6d69d3d975-kube-api-access-xlvdj\") pod \"perses-operator-669c9f96b5-prwfx\" (UID: \"4f2a238b-1c94-4943-8e48-8f6d69d3d975\") " pod="openshift-operators/perses-operator-669c9f96b5-prwfx" Jan 07 10:02:19 crc kubenswrapper[5131]: I0107 10:02:19.006729 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-85c68dddb-mdzv4" Jan 07 10:02:19 crc kubenswrapper[5131]: I0107 10:02:19.019466 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-669c9f96b5-prwfx" Jan 07 10:02:19 crc kubenswrapper[5131]: I0107 10:02:19.286045 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-prwfx"] Jan 07 10:02:19 crc kubenswrapper[5131]: W0107 10:02:19.301674 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f2a238b_1c94_4943_8e48_8f6d69d3d975.slice/crio-8dabe8b666bd5d40f582a30e759ab054250fbd28591dcd66ee4189850173d174 WatchSource:0}: Error finding container 8dabe8b666bd5d40f582a30e759ab054250fbd28591dcd66ee4189850173d174: Status 404 returned error can't find the container with id 8dabe8b666bd5d40f582a30e759ab054250fbd28591dcd66ee4189850173d174 Jan 07 10:02:19 crc kubenswrapper[5131]: I0107 10:02:19.579258 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-85c68dddb-mdzv4"] Jan 07 10:02:19 crc kubenswrapper[5131]: W0107 10:02:19.611140 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod991d7296_8f72_4c27_9a9c_de2becfb27dd.slice/crio-43236813ee245e3472156b865e257e0fd3e6cdbd7ce65477008630f527b40b0d WatchSource:0}: Error finding container 
43236813ee245e3472156b865e257e0fd3e6cdbd7ce65477008630f527b40b0d: Status 404 returned error can't find the container with id 43236813ee245e3472156b865e257e0fd3e6cdbd7ce65477008630f527b40b0d Jan 07 10:02:19 crc kubenswrapper[5131]: I0107 10:02:19.629694 5131 generic.go:358] "Generic (PLEG): container finished" podID="b70cf65d-ed30-49ae-b590-19a7e38dfae7" containerID="1d683fa33384750dcbe3c96e27362e2c465a210bc8e187ae90c9c28eba110acf" exitCode=0 Jan 07 10:02:19 crc kubenswrapper[5131]: I0107 10:02:19.629769 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77d6g" event={"ID":"b70cf65d-ed30-49ae-b590-19a7e38dfae7","Type":"ContainerDied","Data":"1d683fa33384750dcbe3c96e27362e2c465a210bc8e187ae90c9c28eba110acf"} Jan 07 10:02:19 crc kubenswrapper[5131]: I0107 10:02:19.631209 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-669c9f96b5-prwfx" event={"ID":"4f2a238b-1c94-4943-8e48-8f6d69d3d975","Type":"ContainerStarted","Data":"8dabe8b666bd5d40f582a30e759ab054250fbd28591dcd66ee4189850173d174"} Jan 07 10:02:19 crc kubenswrapper[5131]: I0107 10:02:19.635039 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-krthv" event={"ID":"1c927899-39c7-4652-b0da-0af81b334878","Type":"ContainerStarted","Data":"e87c726d5a75c831bd55333316cda006898eba821858f73627eb9514d1ae9ff5"} Jan 07 10:02:19 crc kubenswrapper[5131]: I0107 10:02:19.636867 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-85c68dddb-mdzv4" event={"ID":"991d7296-8f72-4c27-9a9c-de2becfb27dd","Type":"ContainerStarted","Data":"43236813ee245e3472156b865e257e0fd3e6cdbd7ce65477008630f527b40b0d"} Jan 07 10:02:19 crc kubenswrapper[5131]: I0107 10:02:19.640436 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-795b46cc9d-lxzpk" 
event={"ID":"83eb167b-dda6-4f17-b5be-fff07421691b","Type":"ContainerStarted","Data":"4cac2429df3b7cbaa61d57e4b452f60915251eb08f69129d2d3456809e75a20e"} Jan 07 10:02:19 crc kubenswrapper[5131]: I0107 10:02:19.661686 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-krthv" podStartSLOduration=5.570696622 podStartE2EDuration="6.661668598s" podCreationTimestamp="2026-01-07 10:02:13 +0000 UTC" firstStartedPulling="2026-01-07 10:02:15.575913201 +0000 UTC m=+763.742214765" lastFinishedPulling="2026-01-07 10:02:16.666885177 +0000 UTC m=+764.833186741" observedRunningTime="2026-01-07 10:02:19.660004329 +0000 UTC m=+767.826305893" watchObservedRunningTime="2026-01-07 10:02:19.661668598 +0000 UTC m=+767.827970162" Jan 07 10:02:23 crc kubenswrapper[5131]: I0107 10:02:23.403812 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-76d7b8b7dc-5fqz8"] Jan 07 10:02:23 crc kubenswrapper[5131]: I0107 10:02:23.970514 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-krthv" Jan 07 10:02:23 crc kubenswrapper[5131]: I0107 10:02:23.970664 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-krthv" Jan 07 10:02:23 crc kubenswrapper[5131]: I0107 10:02:23.970682 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-76d7b8b7dc-5fqz8"] Jan 07 10:02:23 crc kubenswrapper[5131]: I0107 10:02:23.970701 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-krthv" Jan 07 10:02:23 crc kubenswrapper[5131]: I0107 10:02:23.971080 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elastic-operator-76d7b8b7dc-5fqz8" Jan 07 10:02:23 crc kubenswrapper[5131]: I0107 10:02:23.974226 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-dockercfg-mktmg\"" Jan 07 10:02:23 crc kubenswrapper[5131]: I0107 10:02:23.974680 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"openshift-service-ca.crt\"" Jan 07 10:02:23 crc kubenswrapper[5131]: I0107 10:02:23.974899 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-service-cert\"" Jan 07 10:02:23 crc kubenswrapper[5131]: I0107 10:02:23.975246 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"kube-root-ca.crt\"" Jan 07 10:02:24 crc kubenswrapper[5131]: I0107 10:02:24.040317 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-krthv" Jan 07 10:02:24 crc kubenswrapper[5131]: I0107 10:02:24.048412 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9e4df826-6486-4a4c-8ec4-19c57429f9de-apiservice-cert\") pod \"elastic-operator-76d7b8b7dc-5fqz8\" (UID: \"9e4df826-6486-4a4c-8ec4-19c57429f9de\") " pod="service-telemetry/elastic-operator-76d7b8b7dc-5fqz8" Jan 07 10:02:24 crc kubenswrapper[5131]: I0107 10:02:24.048444 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx97k\" (UniqueName: \"kubernetes.io/projected/9e4df826-6486-4a4c-8ec4-19c57429f9de-kube-api-access-hx97k\") pod \"elastic-operator-76d7b8b7dc-5fqz8\" (UID: \"9e4df826-6486-4a4c-8ec4-19c57429f9de\") " pod="service-telemetry/elastic-operator-76d7b8b7dc-5fqz8" Jan 07 10:02:24 crc kubenswrapper[5131]: I0107 10:02:24.048488 5131 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9e4df826-6486-4a4c-8ec4-19c57429f9de-webhook-cert\") pod \"elastic-operator-76d7b8b7dc-5fqz8\" (UID: \"9e4df826-6486-4a4c-8ec4-19c57429f9de\") " pod="service-telemetry/elastic-operator-76d7b8b7dc-5fqz8" Jan 07 10:02:24 crc kubenswrapper[5131]: I0107 10:02:24.149948 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9e4df826-6486-4a4c-8ec4-19c57429f9de-apiservice-cert\") pod \"elastic-operator-76d7b8b7dc-5fqz8\" (UID: \"9e4df826-6486-4a4c-8ec4-19c57429f9de\") " pod="service-telemetry/elastic-operator-76d7b8b7dc-5fqz8" Jan 07 10:02:24 crc kubenswrapper[5131]: I0107 10:02:24.150031 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hx97k\" (UniqueName: \"kubernetes.io/projected/9e4df826-6486-4a4c-8ec4-19c57429f9de-kube-api-access-hx97k\") pod \"elastic-operator-76d7b8b7dc-5fqz8\" (UID: \"9e4df826-6486-4a4c-8ec4-19c57429f9de\") " pod="service-telemetry/elastic-operator-76d7b8b7dc-5fqz8" Jan 07 10:02:24 crc kubenswrapper[5131]: I0107 10:02:24.150108 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9e4df826-6486-4a4c-8ec4-19c57429f9de-webhook-cert\") pod \"elastic-operator-76d7b8b7dc-5fqz8\" (UID: \"9e4df826-6486-4a4c-8ec4-19c57429f9de\") " pod="service-telemetry/elastic-operator-76d7b8b7dc-5fqz8" Jan 07 10:02:24 crc kubenswrapper[5131]: I0107 10:02:24.157330 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9e4df826-6486-4a4c-8ec4-19c57429f9de-apiservice-cert\") pod \"elastic-operator-76d7b8b7dc-5fqz8\" (UID: \"9e4df826-6486-4a4c-8ec4-19c57429f9de\") " pod="service-telemetry/elastic-operator-76d7b8b7dc-5fqz8" Jan 07 10:02:24 crc 
kubenswrapper[5131]: I0107 10:02:24.157344 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9e4df826-6486-4a4c-8ec4-19c57429f9de-webhook-cert\") pod \"elastic-operator-76d7b8b7dc-5fqz8\" (UID: \"9e4df826-6486-4a4c-8ec4-19c57429f9de\") " pod="service-telemetry/elastic-operator-76d7b8b7dc-5fqz8" Jan 07 10:02:24 crc kubenswrapper[5131]: I0107 10:02:24.171273 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hx97k\" (UniqueName: \"kubernetes.io/projected/9e4df826-6486-4a4c-8ec4-19c57429f9de-kube-api-access-hx97k\") pod \"elastic-operator-76d7b8b7dc-5fqz8\" (UID: \"9e4df826-6486-4a4c-8ec4-19c57429f9de\") " pod="service-telemetry/elastic-operator-76d7b8b7dc-5fqz8" Jan 07 10:02:24 crc kubenswrapper[5131]: I0107 10:02:24.317470 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-76d7b8b7dc-5fqz8" Jan 07 10:02:24 crc kubenswrapper[5131]: I0107 10:02:24.496524 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-btkxt" Jan 07 10:02:24 crc kubenswrapper[5131]: I0107 10:02:24.538035 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-btkxt" Jan 07 10:02:24 crc kubenswrapper[5131]: I0107 10:02:24.912566 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-krthv"] Jan 07 10:02:25 crc kubenswrapper[5131]: I0107 10:02:25.395574 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-5t75g"] Jan 07 10:02:25 crc kubenswrapper[5131]: I0107 10:02:25.587505 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-5t75g"] Jan 07 10:02:25 crc kubenswrapper[5131]: I0107 10:02:25.587665 5131 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-5t75g" Jan 07 10:02:25 crc kubenswrapper[5131]: I0107 10:02:25.593498 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"interconnect-operator-dockercfg-hlwz4\"" Jan 07 10:02:25 crc kubenswrapper[5131]: I0107 10:02:25.671172 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5t4x\" (UniqueName: \"kubernetes.io/projected/5f39d740-0ac8-4192-abfb-838e6b227197-kube-api-access-m5t4x\") pod \"interconnect-operator-78b9bd8798-5t75g\" (UID: \"5f39d740-0ac8-4192-abfb-838e6b227197\") " pod="service-telemetry/interconnect-operator-78b9bd8798-5t75g" Jan 07 10:02:25 crc kubenswrapper[5131]: I0107 10:02:25.701341 5131 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-krthv" podUID="1c927899-39c7-4652-b0da-0af81b334878" containerName="registry-server" containerID="cri-o://e87c726d5a75c831bd55333316cda006898eba821858f73627eb9514d1ae9ff5" gracePeriod=2 Jan 07 10:02:25 crc kubenswrapper[5131]: I0107 10:02:25.773036 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m5t4x\" (UniqueName: \"kubernetes.io/projected/5f39d740-0ac8-4192-abfb-838e6b227197-kube-api-access-m5t4x\") pod \"interconnect-operator-78b9bd8798-5t75g\" (UID: \"5f39d740-0ac8-4192-abfb-838e6b227197\") " pod="service-telemetry/interconnect-operator-78b9bd8798-5t75g" Jan 07 10:02:25 crc kubenswrapper[5131]: I0107 10:02:25.790554 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5t4x\" (UniqueName: \"kubernetes.io/projected/5f39d740-0ac8-4192-abfb-838e6b227197-kube-api-access-m5t4x\") pod \"interconnect-operator-78b9bd8798-5t75g\" (UID: \"5f39d740-0ac8-4192-abfb-838e6b227197\") " pod="service-telemetry/interconnect-operator-78b9bd8798-5t75g" Jan 07 10:02:25 crc 
kubenswrapper[5131]: I0107 10:02:25.906390 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-5t75g" Jan 07 10:02:26 crc kubenswrapper[5131]: I0107 10:02:26.715796 5131 generic.go:358] "Generic (PLEG): container finished" podID="1c927899-39c7-4652-b0da-0af81b334878" containerID="e87c726d5a75c831bd55333316cda006898eba821858f73627eb9514d1ae9ff5" exitCode=0 Jan 07 10:02:26 crc kubenswrapper[5131]: I0107 10:02:26.715942 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-krthv" event={"ID":"1c927899-39c7-4652-b0da-0af81b334878","Type":"ContainerDied","Data":"e87c726d5a75c831bd55333316cda006898eba821858f73627eb9514d1ae9ff5"} Jan 07 10:02:29 crc kubenswrapper[5131]: I0107 10:02:29.100971 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-btkxt"] Jan 07 10:02:29 crc kubenswrapper[5131]: I0107 10:02:29.101520 5131 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-btkxt" podUID="35b7eecf-029e-41b4-a582-a87e9c32fa30" containerName="registry-server" containerID="cri-o://3a478d8e6feab61f92d9f91cef115b3341d82fc52fd6f47d9de88039e5ed7ba4" gracePeriod=2 Jan 07 10:02:29 crc kubenswrapper[5131]: I0107 10:02:29.733120 5131 generic.go:358] "Generic (PLEG): container finished" podID="35b7eecf-029e-41b4-a582-a87e9c32fa30" containerID="3a478d8e6feab61f92d9f91cef115b3341d82fc52fd6f47d9de88039e5ed7ba4" exitCode=0 Jan 07 10:02:29 crc kubenswrapper[5131]: I0107 10:02:29.733220 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-btkxt" event={"ID":"35b7eecf-029e-41b4-a582-a87e9c32fa30","Type":"ContainerDied","Data":"3a478d8e6feab61f92d9f91cef115b3341d82fc52fd6f47d9de88039e5ed7ba4"} Jan 07 10:02:33 crc kubenswrapper[5131]: I0107 10:02:33.348749 5131 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-krthv" Jan 07 10:02:33 crc kubenswrapper[5131]: I0107 10:02:33.415614 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfnr7\" (UniqueName: \"kubernetes.io/projected/1c927899-39c7-4652-b0da-0af81b334878-kube-api-access-gfnr7\") pod \"1c927899-39c7-4652-b0da-0af81b334878\" (UID: \"1c927899-39c7-4652-b0da-0af81b334878\") " Jan 07 10:02:33 crc kubenswrapper[5131]: I0107 10:02:33.415686 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c927899-39c7-4652-b0da-0af81b334878-utilities\") pod \"1c927899-39c7-4652-b0da-0af81b334878\" (UID: \"1c927899-39c7-4652-b0da-0af81b334878\") " Jan 07 10:02:33 crc kubenswrapper[5131]: I0107 10:02:33.415761 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c927899-39c7-4652-b0da-0af81b334878-catalog-content\") pod \"1c927899-39c7-4652-b0da-0af81b334878\" (UID: \"1c927899-39c7-4652-b0da-0af81b334878\") " Jan 07 10:02:33 crc kubenswrapper[5131]: I0107 10:02:33.416756 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c927899-39c7-4652-b0da-0af81b334878-utilities" (OuterVolumeSpecName: "utilities") pod "1c927899-39c7-4652-b0da-0af81b334878" (UID: "1c927899-39c7-4652-b0da-0af81b334878"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:02:33 crc kubenswrapper[5131]: I0107 10:02:33.441036 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c927899-39c7-4652-b0da-0af81b334878-kube-api-access-gfnr7" (OuterVolumeSpecName: "kube-api-access-gfnr7") pod "1c927899-39c7-4652-b0da-0af81b334878" (UID: "1c927899-39c7-4652-b0da-0af81b334878"). InnerVolumeSpecName "kube-api-access-gfnr7". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 10:02:33 crc kubenswrapper[5131]: I0107 10:02:33.449464 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c927899-39c7-4652-b0da-0af81b334878-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1c927899-39c7-4652-b0da-0af81b334878" (UID: "1c927899-39c7-4652-b0da-0af81b334878"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:02:33 crc kubenswrapper[5131]: I0107 10:02:33.517098 5131 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c927899-39c7-4652-b0da-0af81b334878-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 07 10:02:33 crc kubenswrapper[5131]: I0107 10:02:33.517128 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gfnr7\" (UniqueName: \"kubernetes.io/projected/1c927899-39c7-4652-b0da-0af81b334878-kube-api-access-gfnr7\") on node \"crc\" DevicePath \"\"" Jan 07 10:02:33 crc kubenswrapper[5131]: I0107 10:02:33.517139 5131 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c927899-39c7-4652-b0da-0af81b334878-utilities\") on node \"crc\" DevicePath \"\"" Jan 07 10:02:33 crc kubenswrapper[5131]: I0107 10:02:33.623910 5131 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-btkxt" Jan 07 10:02:33 crc kubenswrapper[5131]: I0107 10:02:33.720546 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vqrn\" (UniqueName: \"kubernetes.io/projected/35b7eecf-029e-41b4-a582-a87e9c32fa30-kube-api-access-8vqrn\") pod \"35b7eecf-029e-41b4-a582-a87e9c32fa30\" (UID: \"35b7eecf-029e-41b4-a582-a87e9c32fa30\") " Jan 07 10:02:33 crc kubenswrapper[5131]: I0107 10:02:33.720656 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35b7eecf-029e-41b4-a582-a87e9c32fa30-catalog-content\") pod \"35b7eecf-029e-41b4-a582-a87e9c32fa30\" (UID: \"35b7eecf-029e-41b4-a582-a87e9c32fa30\") " Jan 07 10:02:33 crc kubenswrapper[5131]: I0107 10:02:33.720739 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35b7eecf-029e-41b4-a582-a87e9c32fa30-utilities\") pod \"35b7eecf-029e-41b4-a582-a87e9c32fa30\" (UID: \"35b7eecf-029e-41b4-a582-a87e9c32fa30\") " Jan 07 10:02:33 crc kubenswrapper[5131]: I0107 10:02:33.722942 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35b7eecf-029e-41b4-a582-a87e9c32fa30-utilities" (OuterVolumeSpecName: "utilities") pod "35b7eecf-029e-41b4-a582-a87e9c32fa30" (UID: "35b7eecf-029e-41b4-a582-a87e9c32fa30"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:02:33 crc kubenswrapper[5131]: I0107 10:02:33.727883 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35b7eecf-029e-41b4-a582-a87e9c32fa30-kube-api-access-8vqrn" (OuterVolumeSpecName: "kube-api-access-8vqrn") pod "35b7eecf-029e-41b4-a582-a87e9c32fa30" (UID: "35b7eecf-029e-41b4-a582-a87e9c32fa30"). InnerVolumeSpecName "kube-api-access-8vqrn". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 10:02:33 crc kubenswrapper[5131]: I0107 10:02:33.773107 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-btkxt" event={"ID":"35b7eecf-029e-41b4-a582-a87e9c32fa30","Type":"ContainerDied","Data":"da011e475897e12177053d7a81f18b8038ab2f7cd547de841dc0be09cf4ac4bb"} Jan 07 10:02:33 crc kubenswrapper[5131]: I0107 10:02:33.773168 5131 scope.go:117] "RemoveContainer" containerID="3a478d8e6feab61f92d9f91cef115b3341d82fc52fd6f47d9de88039e5ed7ba4" Jan 07 10:02:33 crc kubenswrapper[5131]: I0107 10:02:33.773347 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-btkxt" Jan 07 10:02:33 crc kubenswrapper[5131]: I0107 10:02:33.774468 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-76d7b8b7dc-5fqz8"] Jan 07 10:02:33 crc kubenswrapper[5131]: I0107 10:02:33.780153 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-krthv" event={"ID":"1c927899-39c7-4652-b0da-0af81b334878","Type":"ContainerDied","Data":"05e00bea4dfa122412ac4fb8626ebe6a48dba900a575460bec4d9215bf28eb24"} Jan 07 10:02:33 crc kubenswrapper[5131]: I0107 10:02:33.780207 5131 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-krthv" Jan 07 10:02:33 crc kubenswrapper[5131]: I0107 10:02:33.819020 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-krthv"] Jan 07 10:02:33 crc kubenswrapper[5131]: I0107 10:02:33.822081 5131 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35b7eecf-029e-41b4-a582-a87e9c32fa30-utilities\") on node \"crc\" DevicePath \"\"" Jan 07 10:02:33 crc kubenswrapper[5131]: I0107 10:02:33.822115 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8vqrn\" (UniqueName: \"kubernetes.io/projected/35b7eecf-029e-41b4-a582-a87e9c32fa30-kube-api-access-8vqrn\") on node \"crc\" DevicePath \"\"" Jan 07 10:02:33 crc kubenswrapper[5131]: I0107 10:02:33.822744 5131 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-krthv"] Jan 07 10:02:33 crc kubenswrapper[5131]: I0107 10:02:33.835893 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-5t75g"] Jan 07 10:02:33 crc kubenswrapper[5131]: I0107 10:02:33.860547 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35b7eecf-029e-41b4-a582-a87e9c32fa30-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "35b7eecf-029e-41b4-a582-a87e9c32fa30" (UID: "35b7eecf-029e-41b4-a582-a87e9c32fa30"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:02:33 crc kubenswrapper[5131]: I0107 10:02:33.871053 5131 scope.go:117] "RemoveContainer" containerID="157b224980d0fb08f8e36d15ffcf750d12d1cc6daebb53e3093a48ec49197c4f" Jan 07 10:02:33 crc kubenswrapper[5131]: I0107 10:02:33.902004 5131 scope.go:117] "RemoveContainer" containerID="354107fcd59774e2a99241571a68d3450317cf1d63e51be0ca436140f6595ed6" Jan 07 10:02:33 crc kubenswrapper[5131]: I0107 10:02:33.929746 5131 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35b7eecf-029e-41b4-a582-a87e9c32fa30-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 07 10:02:33 crc kubenswrapper[5131]: I0107 10:02:33.967306 5131 scope.go:117] "RemoveContainer" containerID="e87c726d5a75c831bd55333316cda006898eba821858f73627eb9514d1ae9ff5" Jan 07 10:02:34 crc kubenswrapper[5131]: I0107 10:02:34.000058 5131 scope.go:117] "RemoveContainer" containerID="1b002f582b6aed5093cc1933383827219f4f23b9fa5fe8e0013ab29756805ef6" Jan 07 10:02:34 crc kubenswrapper[5131]: I0107 10:02:34.049018 5131 scope.go:117] "RemoveContainer" containerID="4e038eb66c67fc4039e5e0dbdb1c8a08f925ea366ba49a343c688bf94a45e7f6" Jan 07 10:02:34 crc kubenswrapper[5131]: I0107 10:02:34.105984 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-btkxt"] Jan 07 10:02:34 crc kubenswrapper[5131]: I0107 10:02:34.109693 5131 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-btkxt"] Jan 07 10:02:34 crc kubenswrapper[5131]: I0107 10:02:34.186423 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c927899-39c7-4652-b0da-0af81b334878" path="/var/lib/kubelet/pods/1c927899-39c7-4652-b0da-0af81b334878/volumes" Jan 07 10:02:34 crc kubenswrapper[5131]: I0107 10:02:34.186999 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35b7eecf-029e-41b4-a582-a87e9c32fa30" 
path="/var/lib/kubelet/pods/35b7eecf-029e-41b4-a582-a87e9c32fa30/volumes" Jan 07 10:02:34 crc kubenswrapper[5131]: I0107 10:02:34.788332 5131 generic.go:358] "Generic (PLEG): container finished" podID="b70cf65d-ed30-49ae-b590-19a7e38dfae7" containerID="531b957df2585e8f7396787b69d0c685ff50a0c8e68eedaf165f2dfb08ee113f" exitCode=0 Jan 07 10:02:34 crc kubenswrapper[5131]: I0107 10:02:34.788388 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77d6g" event={"ID":"b70cf65d-ed30-49ae-b590-19a7e38dfae7","Type":"ContainerDied","Data":"531b957df2585e8f7396787b69d0c685ff50a0c8e68eedaf165f2dfb08ee113f"} Jan 07 10:02:34 crc kubenswrapper[5131]: I0107 10:02:34.790768 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-795b46cc9d-w9fz2" event={"ID":"5d36a675-e1c1-4c4e-9713-b9a91a58a13c","Type":"ContainerStarted","Data":"98f0772ba444adf5642fa6c92985648b5fa73b67b281cddfba1fafff0b96e42d"} Jan 07 10:02:34 crc kubenswrapper[5131]: I0107 10:02:34.792474 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-669c9f96b5-prwfx" event={"ID":"4f2a238b-1c94-4943-8e48-8f6d69d3d975","Type":"ContainerStarted","Data":"472ed27bc75b138fc4fc121b1fbc7913718873ddb3ec864d8feb67aabc107b2e"} Jan 07 10:02:34 crc kubenswrapper[5131]: I0107 10:02:34.794435 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-5t75g" event={"ID":"5f39d740-0ac8-4192-abfb-838e6b227197","Type":"ContainerStarted","Data":"f602a01f557a484117c29468ff1ddd5f81433bbbecc49a16413b0a2e07239892"} Jan 07 10:02:34 crc kubenswrapper[5131]: I0107 10:02:34.795278 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/perses-operator-669c9f96b5-prwfx" Jan 07 10:02:34 crc kubenswrapper[5131]: I0107 10:02:34.796608 5131 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-7jtf6" event={"ID":"c8e50a15-61cb-4e8a-aa55-f77f526b5a0d","Type":"ContainerStarted","Data":"5b278606ece80c790364c4946012d6119d7ce763178f997d01627e6999de1e11"} Jan 07 10:02:34 crc kubenswrapper[5131]: I0107 10:02:34.798914 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-85c68dddb-mdzv4" event={"ID":"991d7296-8f72-4c27-9a9c-de2becfb27dd","Type":"ContainerStarted","Data":"984ef94677581a9892e513353f133290f68cf53c9d0955c0225f3b2374ef2d5f"} Jan 07 10:02:34 crc kubenswrapper[5131]: I0107 10:02:34.800114 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/observability-operator-85c68dddb-mdzv4" Jan 07 10:02:34 crc kubenswrapper[5131]: I0107 10:02:34.801948 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-85c68dddb-mdzv4" Jan 07 10:02:34 crc kubenswrapper[5131]: I0107 10:02:34.802432 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-76d7b8b7dc-5fqz8" event={"ID":"9e4df826-6486-4a4c-8ec4-19c57429f9de","Type":"ContainerStarted","Data":"0b68283bc6480a70f35b23c218302b26a77e29e36b7ca7e7211c6bdcd2bd63d5"} Jan 07 10:02:34 crc kubenswrapper[5131]: I0107 10:02:34.807464 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-795b46cc9d-lxzpk" event={"ID":"83eb167b-dda6-4f17-b5be-fff07421691b","Type":"ContainerStarted","Data":"128c55b65c3ac6bb4f89475001c758b2cc015eb0b7fd41c878799f5d18ebc097"} Jan 07 10:02:34 crc kubenswrapper[5131]: I0107 10:02:34.834529 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-85c68dddb-mdzv4" podStartSLOduration=3.8764468389999998 podStartE2EDuration="17.83450882s" podCreationTimestamp="2026-01-07 10:02:17 +0000 
UTC" firstStartedPulling="2026-01-07 10:02:19.61214497 +0000 UTC m=+767.778446534" lastFinishedPulling="2026-01-07 10:02:33.570206951 +0000 UTC m=+781.736508515" observedRunningTime="2026-01-07 10:02:34.822208171 +0000 UTC m=+782.988509735" watchObservedRunningTime="2026-01-07 10:02:34.83450882 +0000 UTC m=+783.000810384" Jan 07 10:02:34 crc kubenswrapper[5131]: I0107 10:02:34.849676 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-669c9f96b5-prwfx" podStartSLOduration=2.583506486 podStartE2EDuration="16.849658428s" podCreationTimestamp="2026-01-07 10:02:18 +0000 UTC" firstStartedPulling="2026-01-07 10:02:19.304036269 +0000 UTC m=+767.470337843" lastFinishedPulling="2026-01-07 10:02:33.570188221 +0000 UTC m=+781.736489785" observedRunningTime="2026-01-07 10:02:34.849229748 +0000 UTC m=+783.015531332" watchObservedRunningTime="2026-01-07 10:02:34.849658428 +0000 UTC m=+783.015959992" Jan 07 10:02:34 crc kubenswrapper[5131]: I0107 10:02:34.892477 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-7jtf6" podStartSLOduration=2.865002219 podStartE2EDuration="17.892460668s" podCreationTimestamp="2026-01-07 10:02:17 +0000 UTC" firstStartedPulling="2026-01-07 10:02:18.47352762 +0000 UTC m=+766.639829184" lastFinishedPulling="2026-01-07 10:02:33.500986059 +0000 UTC m=+781.667287633" observedRunningTime="2026-01-07 10:02:34.875007774 +0000 UTC m=+783.041309358" watchObservedRunningTime="2026-01-07 10:02:34.892460668 +0000 UTC m=+783.058762232" Jan 07 10:02:34 crc kubenswrapper[5131]: I0107 10:02:34.894705 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-795b46cc9d-w9fz2" podStartSLOduration=2.771967216 podStartE2EDuration="17.894689513s" podCreationTimestamp="2026-01-07 10:02:17 +0000 UTC" firstStartedPulling="2026-01-07 10:02:18.377868693 
+0000 UTC m=+766.544170257" lastFinishedPulling="2026-01-07 10:02:33.50059099 +0000 UTC m=+781.666892554" observedRunningTime="2026-01-07 10:02:34.890730686 +0000 UTC m=+783.057032270" watchObservedRunningTime="2026-01-07 10:02:34.894689513 +0000 UTC m=+783.060991097" Jan 07 10:02:34 crc kubenswrapper[5131]: I0107 10:02:34.919046 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-795b46cc9d-lxzpk" podStartSLOduration=2.990022649 podStartE2EDuration="17.919027454s" podCreationTimestamp="2026-01-07 10:02:17 +0000 UTC" firstStartedPulling="2026-01-07 10:02:18.640651433 +0000 UTC m=+766.806952997" lastFinishedPulling="2026-01-07 10:02:33.569656238 +0000 UTC m=+781.735957802" observedRunningTime="2026-01-07 10:02:34.914156126 +0000 UTC m=+783.080457710" watchObservedRunningTime="2026-01-07 10:02:34.919027454 +0000 UTC m=+783.085329008" Jan 07 10:02:35 crc kubenswrapper[5131]: I0107 10:02:35.824912 5131 generic.go:358] "Generic (PLEG): container finished" podID="b70cf65d-ed30-49ae-b590-19a7e38dfae7" containerID="fe03a6cd2e1b768075f511b754b7271a83a5d63302f3c39a748032d012020e51" exitCode=0 Jan 07 10:02:35 crc kubenswrapper[5131]: I0107 10:02:35.826073 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77d6g" event={"ID":"b70cf65d-ed30-49ae-b590-19a7e38dfae7","Type":"ContainerDied","Data":"fe03a6cd2e1b768075f511b754b7271a83a5d63302f3c39a748032d012020e51"} Jan 07 10:02:37 crc kubenswrapper[5131]: I0107 10:02:37.445222 5131 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77d6g" Jan 07 10:02:37 crc kubenswrapper[5131]: I0107 10:02:37.486498 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wld9\" (UniqueName: \"kubernetes.io/projected/b70cf65d-ed30-49ae-b590-19a7e38dfae7-kube-api-access-6wld9\") pod \"b70cf65d-ed30-49ae-b590-19a7e38dfae7\" (UID: \"b70cf65d-ed30-49ae-b590-19a7e38dfae7\") " Jan 07 10:02:37 crc kubenswrapper[5131]: I0107 10:02:37.486597 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b70cf65d-ed30-49ae-b590-19a7e38dfae7-bundle\") pod \"b70cf65d-ed30-49ae-b590-19a7e38dfae7\" (UID: \"b70cf65d-ed30-49ae-b590-19a7e38dfae7\") " Jan 07 10:02:37 crc kubenswrapper[5131]: I0107 10:02:37.486617 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b70cf65d-ed30-49ae-b590-19a7e38dfae7-util\") pod \"b70cf65d-ed30-49ae-b590-19a7e38dfae7\" (UID: \"b70cf65d-ed30-49ae-b590-19a7e38dfae7\") " Jan 07 10:02:37 crc kubenswrapper[5131]: I0107 10:02:37.488734 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b70cf65d-ed30-49ae-b590-19a7e38dfae7-bundle" (OuterVolumeSpecName: "bundle") pod "b70cf65d-ed30-49ae-b590-19a7e38dfae7" (UID: "b70cf65d-ed30-49ae-b590-19a7e38dfae7"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:02:37 crc kubenswrapper[5131]: I0107 10:02:37.496003 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b70cf65d-ed30-49ae-b590-19a7e38dfae7-util" (OuterVolumeSpecName: "util") pod "b70cf65d-ed30-49ae-b590-19a7e38dfae7" (UID: "b70cf65d-ed30-49ae-b590-19a7e38dfae7"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:02:37 crc kubenswrapper[5131]: I0107 10:02:37.507737 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b70cf65d-ed30-49ae-b590-19a7e38dfae7-kube-api-access-6wld9" (OuterVolumeSpecName: "kube-api-access-6wld9") pod "b70cf65d-ed30-49ae-b590-19a7e38dfae7" (UID: "b70cf65d-ed30-49ae-b590-19a7e38dfae7"). InnerVolumeSpecName "kube-api-access-6wld9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 10:02:37 crc kubenswrapper[5131]: I0107 10:02:37.588273 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6wld9\" (UniqueName: \"kubernetes.io/projected/b70cf65d-ed30-49ae-b590-19a7e38dfae7-kube-api-access-6wld9\") on node \"crc\" DevicePath \"\"" Jan 07 10:02:37 crc kubenswrapper[5131]: I0107 10:02:37.588316 5131 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b70cf65d-ed30-49ae-b590-19a7e38dfae7-bundle\") on node \"crc\" DevicePath \"\"" Jan 07 10:02:37 crc kubenswrapper[5131]: I0107 10:02:37.588353 5131 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b70cf65d-ed30-49ae-b590-19a7e38dfae7-util\") on node \"crc\" DevicePath \"\"" Jan 07 10:02:37 crc kubenswrapper[5131]: I0107 10:02:37.839892 5131 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77d6g" Jan 07 10:02:37 crc kubenswrapper[5131]: I0107 10:02:37.839890 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77d6g" event={"ID":"b70cf65d-ed30-49ae-b590-19a7e38dfae7","Type":"ContainerDied","Data":"22882eb323b4cf8858add3fb763fe912b9a07be935beca764a779c9ef78e0b66"} Jan 07 10:02:37 crc kubenswrapper[5131]: I0107 10:02:37.840335 5131 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22882eb323b4cf8858add3fb763fe912b9a07be935beca764a779c9ef78e0b66" Jan 07 10:02:41 crc kubenswrapper[5131]: I0107 10:02:41.867906 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-5t75g" event={"ID":"5f39d740-0ac8-4192-abfb-838e6b227197","Type":"ContainerStarted","Data":"e814b0abf5bb52e7d3e07cc3887c63215a9ab05405dd703676b7e3ab8e25b666"} Jan 07 10:02:41 crc kubenswrapper[5131]: I0107 10:02:41.870164 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-76d7b8b7dc-5fqz8" event={"ID":"9e4df826-6486-4a4c-8ec4-19c57429f9de","Type":"ContainerStarted","Data":"c62673940e87b3fbd61a48b54e9b8950ea703bcf98d1904ff7582406b160c5a7"} Jan 07 10:02:41 crc kubenswrapper[5131]: I0107 10:02:41.885901 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-78b9bd8798-5t75g" podStartSLOduration=9.681024851 podStartE2EDuration="16.885881487s" podCreationTimestamp="2026-01-07 10:02:25 +0000 UTC" firstStartedPulling="2026-01-07 10:02:33.866682065 +0000 UTC m=+782.032983639" lastFinishedPulling="2026-01-07 10:02:41.071538711 +0000 UTC m=+789.237840275" observedRunningTime="2026-01-07 10:02:41.884510354 +0000 UTC m=+790.050811918" watchObservedRunningTime="2026-01-07 10:02:41.885881487 +0000 UTC m=+790.052183041" Jan 
07 10:02:41 crc kubenswrapper[5131]: I0107 10:02:41.933376 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-76d7b8b7dc-5fqz8" podStartSLOduration=11.769779907 podStartE2EDuration="18.9333364s" podCreationTimestamp="2026-01-07 10:02:23 +0000 UTC" firstStartedPulling="2026-01-07 10:02:33.869549824 +0000 UTC m=+782.035851388" lastFinishedPulling="2026-01-07 10:02:41.033106317 +0000 UTC m=+789.199407881" observedRunningTime="2026-01-07 10:02:41.927330164 +0000 UTC m=+790.093631728" watchObservedRunningTime="2026-01-07 10:02:41.9333364 +0000 UTC m=+790.099637964" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.497063 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.497819 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1c927899-39c7-4652-b0da-0af81b334878" containerName="extract-utilities" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.497852 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c927899-39c7-4652-b0da-0af81b334878" containerName="extract-utilities" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.497865 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1c927899-39c7-4652-b0da-0af81b334878" containerName="extract-content" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.497873 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c927899-39c7-4652-b0da-0af81b334878" containerName="extract-content" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.497902 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="35b7eecf-029e-41b4-a582-a87e9c32fa30" containerName="registry-server" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.497907 5131 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="35b7eecf-029e-41b4-a582-a87e9c32fa30" containerName="registry-server" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.497918 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="35b7eecf-029e-41b4-a582-a87e9c32fa30" containerName="extract-content" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.497924 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="35b7eecf-029e-41b4-a582-a87e9c32fa30" containerName="extract-content" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.497936 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b70cf65d-ed30-49ae-b590-19a7e38dfae7" containerName="util" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.497941 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="b70cf65d-ed30-49ae-b590-19a7e38dfae7" containerName="util" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.497954 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1c927899-39c7-4652-b0da-0af81b334878" containerName="registry-server" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.497960 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c927899-39c7-4652-b0da-0af81b334878" containerName="registry-server" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.497968 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b70cf65d-ed30-49ae-b590-19a7e38dfae7" containerName="pull" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.497973 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="b70cf65d-ed30-49ae-b590-19a7e38dfae7" containerName="pull" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.497982 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b70cf65d-ed30-49ae-b590-19a7e38dfae7" containerName="extract" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.497987 5131 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="b70cf65d-ed30-49ae-b590-19a7e38dfae7" containerName="extract" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.497994 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="35b7eecf-029e-41b4-a582-a87e9c32fa30" containerName="extract-utilities" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.497999 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="35b7eecf-029e-41b4-a582-a87e9c32fa30" containerName="extract-utilities" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.498090 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="35b7eecf-029e-41b4-a582-a87e9c32fa30" containerName="registry-server" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.498102 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="b70cf65d-ed30-49ae-b590-19a7e38dfae7" containerName="extract" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.498111 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="1c927899-39c7-4652-b0da-0af81b334878" containerName="registry-server" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.502077 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.504735 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-internal-users\"" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.504855 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-scripts\"" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.506498 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-config\"" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.506642 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-remote-ca\"" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.506903 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-xpack-file-realm\"" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.507050 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-unicast-hosts\"" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.507501 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-http-certs-internal\"" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.507578 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-dockercfg-clnx2\"" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.507704 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-transport-certs\"" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.518718 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["service-telemetry/elasticsearch-es-default-0"] Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.557041 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/eebbd95e-bc5a-4c38-817e-06e8a132f328-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"eebbd95e-bc5a-4c38-817e-06e8a132f328\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.557088 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/eebbd95e-bc5a-4c38-817e-06e8a132f328-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"eebbd95e-bc5a-4c38-817e-06e8a132f328\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.557107 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/eebbd95e-bc5a-4c38-817e-06e8a132f328-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"eebbd95e-bc5a-4c38-817e-06e8a132f328\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.557124 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/eebbd95e-bc5a-4c38-817e-06e8a132f328-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"eebbd95e-bc5a-4c38-817e-06e8a132f328\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.557145 5131 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/eebbd95e-bc5a-4c38-817e-06e8a132f328-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"eebbd95e-bc5a-4c38-817e-06e8a132f328\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.557192 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/eebbd95e-bc5a-4c38-817e-06e8a132f328-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"eebbd95e-bc5a-4c38-817e-06e8a132f328\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.557216 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/eebbd95e-bc5a-4c38-817e-06e8a132f328-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"eebbd95e-bc5a-4c38-817e-06e8a132f328\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.557298 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/eebbd95e-bc5a-4c38-817e-06e8a132f328-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"eebbd95e-bc5a-4c38-817e-06e8a132f328\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.557323 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: 
\"kubernetes.io/secret/eebbd95e-bc5a-4c38-817e-06e8a132f328-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"eebbd95e-bc5a-4c38-817e-06e8a132f328\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.557364 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/eebbd95e-bc5a-4c38-817e-06e8a132f328-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"eebbd95e-bc5a-4c38-817e-06e8a132f328\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.557389 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/eebbd95e-bc5a-4c38-817e-06e8a132f328-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"eebbd95e-bc5a-4c38-817e-06e8a132f328\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.557413 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/eebbd95e-bc5a-4c38-817e-06e8a132f328-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"eebbd95e-bc5a-4c38-817e-06e8a132f328\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.557430 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/eebbd95e-bc5a-4c38-817e-06e8a132f328-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"eebbd95e-bc5a-4c38-817e-06e8a132f328\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 07 
10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.557450 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/eebbd95e-bc5a-4c38-817e-06e8a132f328-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"eebbd95e-bc5a-4c38-817e-06e8a132f328\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.557477 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/eebbd95e-bc5a-4c38-817e-06e8a132f328-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"eebbd95e-bc5a-4c38-817e-06e8a132f328\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.643094 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-qmj5m"] Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.652082 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-qmj5m" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.654219 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-qmj5m"] Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.656128 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"openshift-service-ca.crt\"" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.656326 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"kube-root-ca.crt\"" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.656358 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager-operator\"/\"cert-manager-operator-controller-manager-dockercfg-92n58\"" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.658166 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/eebbd95e-bc5a-4c38-817e-06e8a132f328-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"eebbd95e-bc5a-4c38-817e-06e8a132f328\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.658218 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/eebbd95e-bc5a-4c38-817e-06e8a132f328-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"eebbd95e-bc5a-4c38-817e-06e8a132f328\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.658248 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/eebbd95e-bc5a-4c38-817e-06e8a132f328-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"eebbd95e-bc5a-4c38-817e-06e8a132f328\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.658283 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/eebbd95e-bc5a-4c38-817e-06e8a132f328-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"eebbd95e-bc5a-4c38-817e-06e8a132f328\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.658326 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/eebbd95e-bc5a-4c38-817e-06e8a132f328-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"eebbd95e-bc5a-4c38-817e-06e8a132f328\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.658358 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/eebbd95e-bc5a-4c38-817e-06e8a132f328-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"eebbd95e-bc5a-4c38-817e-06e8a132f328\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.658402 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/eebbd95e-bc5a-4c38-817e-06e8a132f328-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"eebbd95e-bc5a-4c38-817e-06e8a132f328\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 07 
10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.658433 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/eebbd95e-bc5a-4c38-817e-06e8a132f328-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"eebbd95e-bc5a-4c38-817e-06e8a132f328\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.658462 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/eebbd95e-bc5a-4c38-817e-06e8a132f328-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"eebbd95e-bc5a-4c38-817e-06e8a132f328\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.658488 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/eebbd95e-bc5a-4c38-817e-06e8a132f328-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"eebbd95e-bc5a-4c38-817e-06e8a132f328\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.658524 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/eebbd95e-bc5a-4c38-817e-06e8a132f328-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"eebbd95e-bc5a-4c38-817e-06e8a132f328\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.658560 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/eebbd95e-bc5a-4c38-817e-06e8a132f328-downward-api\") pod \"elasticsearch-es-default-0\" (UID: 
\"eebbd95e-bc5a-4c38-817e-06e8a132f328\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.658602 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/eebbd95e-bc5a-4c38-817e-06e8a132f328-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"eebbd95e-bc5a-4c38-817e-06e8a132f328\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.658627 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/eebbd95e-bc5a-4c38-817e-06e8a132f328-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"eebbd95e-bc5a-4c38-817e-06e8a132f328\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.658664 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/eebbd95e-bc5a-4c38-817e-06e8a132f328-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"eebbd95e-bc5a-4c38-817e-06e8a132f328\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.659153 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/eebbd95e-bc5a-4c38-817e-06e8a132f328-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"eebbd95e-bc5a-4c38-817e-06e8a132f328\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.659256 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: 
\"kubernetes.io/empty-dir/eebbd95e-bc5a-4c38-817e-06e8a132f328-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"eebbd95e-bc5a-4c38-817e-06e8a132f328\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.660591 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/eebbd95e-bc5a-4c38-817e-06e8a132f328-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"eebbd95e-bc5a-4c38-817e-06e8a132f328\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.660889 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/eebbd95e-bc5a-4c38-817e-06e8a132f328-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"eebbd95e-bc5a-4c38-817e-06e8a132f328\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.661237 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/eebbd95e-bc5a-4c38-817e-06e8a132f328-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"eebbd95e-bc5a-4c38-817e-06e8a132f328\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.664109 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/eebbd95e-bc5a-4c38-817e-06e8a132f328-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"eebbd95e-bc5a-4c38-817e-06e8a132f328\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.664371 5131 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/eebbd95e-bc5a-4c38-817e-06e8a132f328-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"eebbd95e-bc5a-4c38-817e-06e8a132f328\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.664706 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/eebbd95e-bc5a-4c38-817e-06e8a132f328-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"eebbd95e-bc5a-4c38-817e-06e8a132f328\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.665663 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/eebbd95e-bc5a-4c38-817e-06e8a132f328-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"eebbd95e-bc5a-4c38-817e-06e8a132f328\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.666568 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/eebbd95e-bc5a-4c38-817e-06e8a132f328-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"eebbd95e-bc5a-4c38-817e-06e8a132f328\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.669138 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/eebbd95e-bc5a-4c38-817e-06e8a132f328-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"eebbd95e-bc5a-4c38-817e-06e8a132f328\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 07 10:02:42 crc kubenswrapper[5131]: 
I0107 10:02:42.671864 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/eebbd95e-bc5a-4c38-817e-06e8a132f328-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"eebbd95e-bc5a-4c38-817e-06e8a132f328\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.672269 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/eebbd95e-bc5a-4c38-817e-06e8a132f328-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"eebbd95e-bc5a-4c38-817e-06e8a132f328\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.673864 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/eebbd95e-bc5a-4c38-817e-06e8a132f328-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"eebbd95e-bc5a-4c38-817e-06e8a132f328\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.674543 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/eebbd95e-bc5a-4c38-817e-06e8a132f328-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"eebbd95e-bc5a-4c38-817e-06e8a132f328\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.760019 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r78r\" (UniqueName: \"kubernetes.io/projected/00d732f0-587e-4958-83f0-edb327d23a97-kube-api-access-9r78r\") pod \"cert-manager-operator-controller-manager-64c74584c4-qmj5m\" (UID: \"00d732f0-587e-4958-83f0-edb327d23a97\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-qmj5m"
Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.760074 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/00d732f0-587e-4958-83f0-edb327d23a97-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-qmj5m\" (UID: \"00d732f0-587e-4958-83f0-edb327d23a97\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-qmj5m"
Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.818182 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0"
Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.860993 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9r78r\" (UniqueName: \"kubernetes.io/projected/00d732f0-587e-4958-83f0-edb327d23a97-kube-api-access-9r78r\") pod \"cert-manager-operator-controller-manager-64c74584c4-qmj5m\" (UID: \"00d732f0-587e-4958-83f0-edb327d23a97\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-qmj5m"
Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.861047 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/00d732f0-587e-4958-83f0-edb327d23a97-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-qmj5m\" (UID: \"00d732f0-587e-4958-83f0-edb327d23a97\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-qmj5m"
Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.861546 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/00d732f0-587e-4958-83f0-edb327d23a97-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-qmj5m\" (UID: \"00d732f0-587e-4958-83f0-edb327d23a97\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-qmj5m"
Jan 07 10:02:42 crc kubenswrapper[5131]: I0107 10:02:42.886731 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9r78r\" (UniqueName: \"kubernetes.io/projected/00d732f0-587e-4958-83f0-edb327d23a97-kube-api-access-9r78r\") pod \"cert-manager-operator-controller-manager-64c74584c4-qmj5m\" (UID: \"00d732f0-587e-4958-83f0-edb327d23a97\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-qmj5m"
Jan 07 10:02:43 crc kubenswrapper[5131]: I0107 10:02:43.012700 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-qmj5m"
Jan 07 10:02:43 crc kubenswrapper[5131]: I0107 10:02:43.390754 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Jan 07 10:02:43 crc kubenswrapper[5131]: W0107 10:02:43.402590 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeebbd95e_bc5a_4c38_817e_06e8a132f328.slice/crio-b1df61b2496405e8bfe0b528e100a8d7c83739da6699dfee564a94407a294c1a WatchSource:0}: Error finding container b1df61b2496405e8bfe0b528e100a8d7c83739da6699dfee564a94407a294c1a: Status 404 returned error can't find the container with id b1df61b2496405e8bfe0b528e100a8d7c83739da6699dfee564a94407a294c1a
Jan 07 10:02:43 crc kubenswrapper[5131]: I0107 10:02:43.476743 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-qmj5m"]
Jan 07 10:02:43 crc kubenswrapper[5131]: W0107 10:02:43.481234 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00d732f0_587e_4958_83f0_edb327d23a97.slice/crio-9a2704e13122af649c0325960be0ad2353e308a8ce3898953a0d5a54941b2e63 WatchSource:0}: Error finding container 9a2704e13122af649c0325960be0ad2353e308a8ce3898953a0d5a54941b2e63: Status 404 returned error can't find the container with id 9a2704e13122af649c0325960be0ad2353e308a8ce3898953a0d5a54941b2e63
Jan 07 10:02:43 crc kubenswrapper[5131]: I0107 10:02:43.885872 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-qmj5m" event={"ID":"00d732f0-587e-4958-83f0-edb327d23a97","Type":"ContainerStarted","Data":"9a2704e13122af649c0325960be0ad2353e308a8ce3898953a0d5a54941b2e63"}
Jan 07 10:02:43 crc kubenswrapper[5131]: I0107 10:02:43.887458 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"eebbd95e-bc5a-4c38-817e-06e8a132f328","Type":"ContainerStarted","Data":"b1df61b2496405e8bfe0b528e100a8d7c83739da6699dfee564a94407a294c1a"}
Jan 07 10:02:45 crc kubenswrapper[5131]: I0107 10:02:45.518745 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7n9vm"]
Jan 07 10:02:45 crc kubenswrapper[5131]: I0107 10:02:45.525058 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7n9vm"]
Jan 07 10:02:45 crc kubenswrapper[5131]: I0107 10:02:45.525217 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7n9vm"
Jan 07 10:02:45 crc kubenswrapper[5131]: I0107 10:02:45.603746 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d6485bf-d0bc-4180-ad81-1f14f6a14921-catalog-content\") pod \"community-operators-7n9vm\" (UID: \"1d6485bf-d0bc-4180-ad81-1f14f6a14921\") " pod="openshift-marketplace/community-operators-7n9vm"
Jan 07 10:02:45 crc kubenswrapper[5131]: I0107 10:02:45.603803 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9hpv\" (UniqueName: \"kubernetes.io/projected/1d6485bf-d0bc-4180-ad81-1f14f6a14921-kube-api-access-d9hpv\") pod \"community-operators-7n9vm\" (UID: \"1d6485bf-d0bc-4180-ad81-1f14f6a14921\") " pod="openshift-marketplace/community-operators-7n9vm"
Jan 07 10:02:45 crc kubenswrapper[5131]: I0107 10:02:45.603919 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d6485bf-d0bc-4180-ad81-1f14f6a14921-utilities\") pod \"community-operators-7n9vm\" (UID: \"1d6485bf-d0bc-4180-ad81-1f14f6a14921\") " pod="openshift-marketplace/community-operators-7n9vm"
Jan 07 10:02:45 crc kubenswrapper[5131]: I0107 10:02:45.705044 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d6485bf-d0bc-4180-ad81-1f14f6a14921-catalog-content\") pod \"community-operators-7n9vm\" (UID: \"1d6485bf-d0bc-4180-ad81-1f14f6a14921\") " pod="openshift-marketplace/community-operators-7n9vm"
Jan 07 10:02:45 crc kubenswrapper[5131]: I0107 10:02:45.705098 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d9hpv\" (UniqueName: \"kubernetes.io/projected/1d6485bf-d0bc-4180-ad81-1f14f6a14921-kube-api-access-d9hpv\") pod \"community-operators-7n9vm\" (UID: \"1d6485bf-d0bc-4180-ad81-1f14f6a14921\") " pod="openshift-marketplace/community-operators-7n9vm"
Jan 07 10:02:45 crc kubenswrapper[5131]: I0107 10:02:45.705179 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d6485bf-d0bc-4180-ad81-1f14f6a14921-utilities\") pod \"community-operators-7n9vm\" (UID: \"1d6485bf-d0bc-4180-ad81-1f14f6a14921\") " pod="openshift-marketplace/community-operators-7n9vm"
Jan 07 10:02:45 crc kubenswrapper[5131]: I0107 10:02:45.705619 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d6485bf-d0bc-4180-ad81-1f14f6a14921-catalog-content\") pod \"community-operators-7n9vm\" (UID: \"1d6485bf-d0bc-4180-ad81-1f14f6a14921\") " pod="openshift-marketplace/community-operators-7n9vm"
Jan 07 10:02:45 crc kubenswrapper[5131]: I0107 10:02:45.705667 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d6485bf-d0bc-4180-ad81-1f14f6a14921-utilities\") pod \"community-operators-7n9vm\" (UID: \"1d6485bf-d0bc-4180-ad81-1f14f6a14921\") " pod="openshift-marketplace/community-operators-7n9vm"
Jan 07 10:02:45 crc kubenswrapper[5131]: I0107 10:02:45.739784 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9hpv\" (UniqueName: \"kubernetes.io/projected/1d6485bf-d0bc-4180-ad81-1f14f6a14921-kube-api-access-d9hpv\") pod \"community-operators-7n9vm\" (UID: \"1d6485bf-d0bc-4180-ad81-1f14f6a14921\") " pod="openshift-marketplace/community-operators-7n9vm"
Jan 07 10:02:45 crc kubenswrapper[5131]: I0107 10:02:45.830353 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-669c9f96b5-prwfx"
Jan 07 10:02:45 crc kubenswrapper[5131]: I0107 10:02:45.880309 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7n9vm"
Jan 07 10:02:50 crc kubenswrapper[5131]: I0107 10:02:50.662966 5131 patch_prober.go:28] interesting pod/machine-config-daemon-dvdrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 07 10:02:50 crc kubenswrapper[5131]: I0107 10:02:50.663530 5131 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 07 10:03:05 crc kubenswrapper[5131]: I0107 10:03:05.492204 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7n9vm"]
Jan 07 10:03:06 crc kubenswrapper[5131]: I0107 10:03:06.062588 5131 generic.go:358] "Generic (PLEG): container finished" podID="1d6485bf-d0bc-4180-ad81-1f14f6a14921" containerID="43a7d1fa1eb8842ae7ac7ed7de4b343b0e05e77b2f8f68d56aaad4c75681835d" exitCode=0
Jan 07 10:03:06 crc kubenswrapper[5131]: I0107 10:03:06.062636 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7n9vm" event={"ID":"1d6485bf-d0bc-4180-ad81-1f14f6a14921","Type":"ContainerDied","Data":"43a7d1fa1eb8842ae7ac7ed7de4b343b0e05e77b2f8f68d56aaad4c75681835d"}
Jan 07 10:03:06 crc kubenswrapper[5131]: I0107 10:03:06.063138 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7n9vm" event={"ID":"1d6485bf-d0bc-4180-ad81-1f14f6a14921","Type":"ContainerStarted","Data":"73e17d73e60138845695baa6ad6d4c6b9ac9d08664565d1e7a72a05d06128562"}
Jan 07 10:03:06 crc kubenswrapper[5131]: I0107 10:03:06.065981 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-qmj5m" event={"ID":"00d732f0-587e-4958-83f0-edb327d23a97","Type":"ContainerStarted","Data":"eb3009e837c2231eae07d1dc6bdcadcec4a8cee61a742521ab58ed4f52f04bc4"}
Jan 07 10:03:06 crc kubenswrapper[5131]: I0107 10:03:06.068318 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"eebbd95e-bc5a-4c38-817e-06e8a132f328","Type":"ContainerStarted","Data":"9eedc8df1305d4ea24bcdb126b05d1b345d0dbef538f5eaeb75b51013ab93cf9"}
Jan 07 10:03:06 crc kubenswrapper[5131]: I0107 10:03:06.147489 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-qmj5m" podStartSLOduration=2.517709642 podStartE2EDuration="24.147465228s" podCreationTimestamp="2026-01-07 10:02:42 +0000 UTC" firstStartedPulling="2026-01-07 10:02:43.483616757 +0000 UTC m=+791.649918321" lastFinishedPulling="2026-01-07 10:03:05.113372353 +0000 UTC m=+813.279673907" observedRunningTime="2026-01-07 10:03:06.138206803 +0000 UTC m=+814.304508387" watchObservedRunningTime="2026-01-07 10:03:06.147465228 +0000 UTC m=+814.313766812"
Jan 07 10:03:06 crc kubenswrapper[5131]: I0107 10:03:06.288254 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Jan 07 10:03:06 crc kubenswrapper[5131]: I0107 10:03:06.319350 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Jan 07 10:03:07 crc kubenswrapper[5131]: I0107 10:03:07.083760 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7n9vm" event={"ID":"1d6485bf-d0bc-4180-ad81-1f14f6a14921","Type":"ContainerStarted","Data":"121d4961c3c59643e214f9ae4241f08bbcde043f70157d9003ca285d9d3b7b9f"}
Jan 07 10:03:08 crc kubenswrapper[5131]: I0107 10:03:08.090322 5131 generic.go:358] "Generic (PLEG): container finished" podID="eebbd95e-bc5a-4c38-817e-06e8a132f328" containerID="9eedc8df1305d4ea24bcdb126b05d1b345d0dbef538f5eaeb75b51013ab93cf9" exitCode=0
Jan 07 10:03:08 crc kubenswrapper[5131]: I0107 10:03:08.090417 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"eebbd95e-bc5a-4c38-817e-06e8a132f328","Type":"ContainerDied","Data":"9eedc8df1305d4ea24bcdb126b05d1b345d0dbef538f5eaeb75b51013ab93cf9"}
Jan 07 10:03:08 crc kubenswrapper[5131]: I0107 10:03:08.092215 5131 generic.go:358] "Generic (PLEG): container finished" podID="1d6485bf-d0bc-4180-ad81-1f14f6a14921" containerID="121d4961c3c59643e214f9ae4241f08bbcde043f70157d9003ca285d9d3b7b9f" exitCode=0
Jan 07 10:03:08 crc kubenswrapper[5131]: I0107 10:03:08.092311 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7n9vm" event={"ID":"1d6485bf-d0bc-4180-ad81-1f14f6a14921","Type":"ContainerDied","Data":"121d4961c3c59643e214f9ae4241f08bbcde043f70157d9003ca285d9d3b7b9f"}
Jan 07 10:03:09 crc kubenswrapper[5131]: I0107 10:03:09.100232 5131 generic.go:358] "Generic (PLEG): container finished" podID="eebbd95e-bc5a-4c38-817e-06e8a132f328" containerID="37b96018ea94b18c453a6af571acd8acac37c2c0851c43a4a79123f934d5e8ac" exitCode=0
Jan 07 10:03:09 crc kubenswrapper[5131]: I0107 10:03:09.100350 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"eebbd95e-bc5a-4c38-817e-06e8a132f328","Type":"ContainerDied","Data":"37b96018ea94b18c453a6af571acd8acac37c2c0851c43a4a79123f934d5e8ac"}
Jan 07 10:03:09 crc kubenswrapper[5131]: I0107 10:03:09.102656 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7n9vm" event={"ID":"1d6485bf-d0bc-4180-ad81-1f14f6a14921","Type":"ContainerStarted","Data":"b91e20b922e0eb18278506eddc0f0c37d92b38200d28642651ccab3be4a5c66d"}
Jan 07 10:03:09 crc kubenswrapper[5131]: I0107 10:03:09.165807 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7n9vm" podStartSLOduration=23.289687507 podStartE2EDuration="24.165778554s" podCreationTimestamp="2026-01-07 10:02:45 +0000 UTC" firstStartedPulling="2026-01-07 10:03:06.063374935 +0000 UTC m=+814.229676499" lastFinishedPulling="2026-01-07 10:03:06.939465982 +0000 UTC m=+815.105767546" observedRunningTime="2026-01-07 10:03:09.154626003 +0000 UTC m=+817.320927577" watchObservedRunningTime="2026-01-07 10:03:09.165778554 +0000 UTC m=+817.332080158"
Jan 07 10:03:10 crc kubenswrapper[5131]: I0107 10:03:10.110612 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"eebbd95e-bc5a-4c38-817e-06e8a132f328","Type":"ContainerStarted","Data":"3e84651c157071df14c4edaaa74a37c7ccd3e3b070f8bc088705e3bf5365ab3d"}
Jan 07 10:03:10 crc kubenswrapper[5131]: I0107 10:03:10.110729 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/elasticsearch-es-default-0"
Jan 07 10:03:10 crc kubenswrapper[5131]: I0107 10:03:10.164477 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=5.98717465 podStartE2EDuration="28.164455779s" podCreationTimestamp="2026-01-07 10:02:42 +0000 UTC" firstStartedPulling="2026-01-07 10:02:43.404963106 +0000 UTC m=+791.571264670" lastFinishedPulling="2026-01-07 10:03:05.582244235 +0000 UTC m=+813.748545799" observedRunningTime="2026-01-07 10:03:10.143384767 +0000 UTC m=+818.309686391" watchObservedRunningTime="2026-01-07 10:03:10.164455779 +0000 UTC m=+818.330757343"
Jan 07 10:03:10 crc kubenswrapper[5131]: I0107 10:03:10.385262 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-pnzxn"]
Jan 07 10:03:10 crc kubenswrapper[5131]: I0107 10:03:10.390336 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-pnzxn"
Jan 07 10:03:10 crc kubenswrapper[5131]: I0107 10:03:10.392492 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-webhook-dockercfg-vfxc9\""
Jan 07 10:03:10 crc kubenswrapper[5131]: I0107 10:03:10.392691 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"openshift-service-ca.crt\""
Jan 07 10:03:10 crc kubenswrapper[5131]: I0107 10:03:10.392821 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"kube-root-ca.crt\""
Jan 07 10:03:10 crc kubenswrapper[5131]: I0107 10:03:10.395147 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-pnzxn"]
Jan 07 10:03:10 crc kubenswrapper[5131]: I0107 10:03:10.437731 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpxjl\" (UniqueName: \"kubernetes.io/projected/874e81d2-06c0-4aad-aa39-701198f0be4d-kube-api-access-tpxjl\") pod \"cert-manager-webhook-7894b5b9b4-pnzxn\" (UID: \"874e81d2-06c0-4aad-aa39-701198f0be4d\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-pnzxn"
Jan 07 10:03:10 crc kubenswrapper[5131]: I0107 10:03:10.437772 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/874e81d2-06c0-4aad-aa39-701198f0be4d-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-pnzxn\" (UID: \"874e81d2-06c0-4aad-aa39-701198f0be4d\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-pnzxn"
Jan 07 10:03:10 crc kubenswrapper[5131]: I0107 10:03:10.538715 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tpxjl\" (UniqueName: \"kubernetes.io/projected/874e81d2-06c0-4aad-aa39-701198f0be4d-kube-api-access-tpxjl\") pod \"cert-manager-webhook-7894b5b9b4-pnzxn\" (UID: \"874e81d2-06c0-4aad-aa39-701198f0be4d\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-pnzxn"
Jan 07 10:03:10 crc kubenswrapper[5131]: I0107 10:03:10.538777 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/874e81d2-06c0-4aad-aa39-701198f0be4d-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-pnzxn\" (UID: \"874e81d2-06c0-4aad-aa39-701198f0be4d\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-pnzxn"
Jan 07 10:03:10 crc kubenswrapper[5131]: I0107 10:03:10.557257 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpxjl\" (UniqueName: \"kubernetes.io/projected/874e81d2-06c0-4aad-aa39-701198f0be4d-kube-api-access-tpxjl\") pod \"cert-manager-webhook-7894b5b9b4-pnzxn\" (UID: \"874e81d2-06c0-4aad-aa39-701198f0be4d\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-pnzxn"
Jan 07 10:03:10 crc kubenswrapper[5131]: I0107 10:03:10.557600 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/874e81d2-06c0-4aad-aa39-701198f0be4d-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-pnzxn\" (UID: \"874e81d2-06c0-4aad-aa39-701198f0be4d\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-pnzxn"
Jan 07 10:03:10 crc kubenswrapper[5131]: I0107 10:03:10.704298 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-pnzxn"
Jan 07 10:03:10 crc kubenswrapper[5131]: I0107 10:03:10.972601 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-pnzxn"]
Jan 07 10:03:11 crc kubenswrapper[5131]: I0107 10:03:11.117388 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-pnzxn" event={"ID":"874e81d2-06c0-4aad-aa39-701198f0be4d","Type":"ContainerStarted","Data":"91673a1ea51779c154811a4fee184b39c18078d00d0d2ff3199b85a040c2b7d5"}
Jan 07 10:03:11 crc kubenswrapper[5131]: I0107 10:03:11.249260 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-fj7bn"]
Jan 07 10:03:11 crc kubenswrapper[5131]: I0107 10:03:11.254766 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-fj7bn"
Jan 07 10:03:11 crc kubenswrapper[5131]: I0107 10:03:11.257981 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-xqj4q\""
Jan 07 10:03:11 crc kubenswrapper[5131]: I0107 10:03:11.262935 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-fj7bn"]
Jan 07 10:03:11 crc kubenswrapper[5131]: I0107 10:03:11.351323 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xstj4\" (UniqueName: \"kubernetes.io/projected/cab27a2d-a22b-44d7-83c1-f57c6d2bad11-kube-api-access-xstj4\") pod \"cert-manager-cainjector-7dbf76d5c8-fj7bn\" (UID: \"cab27a2d-a22b-44d7-83c1-f57c6d2bad11\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-fj7bn"
Jan 07 10:03:11 crc kubenswrapper[5131]: I0107 10:03:11.351387 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cab27a2d-a22b-44d7-83c1-f57c6d2bad11-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-fj7bn\" (UID: \"cab27a2d-a22b-44d7-83c1-f57c6d2bad11\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-fj7bn"
Jan 07 10:03:11 crc kubenswrapper[5131]: I0107 10:03:11.453181 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xstj4\" (UniqueName: \"kubernetes.io/projected/cab27a2d-a22b-44d7-83c1-f57c6d2bad11-kube-api-access-xstj4\") pod \"cert-manager-cainjector-7dbf76d5c8-fj7bn\" (UID: \"cab27a2d-a22b-44d7-83c1-f57c6d2bad11\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-fj7bn"
Jan 07 10:03:11 crc kubenswrapper[5131]: I0107 10:03:11.453361 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cab27a2d-a22b-44d7-83c1-f57c6d2bad11-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-fj7bn\" (UID: \"cab27a2d-a22b-44d7-83c1-f57c6d2bad11\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-fj7bn"
Jan 07 10:03:11 crc kubenswrapper[5131]: I0107 10:03:11.482706 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cab27a2d-a22b-44d7-83c1-f57c6d2bad11-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-fj7bn\" (UID: \"cab27a2d-a22b-44d7-83c1-f57c6d2bad11\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-fj7bn"
Jan 07 10:03:11 crc kubenswrapper[5131]: I0107 10:03:11.482988 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xstj4\" (UniqueName: \"kubernetes.io/projected/cab27a2d-a22b-44d7-83c1-f57c6d2bad11-kube-api-access-xstj4\") pod \"cert-manager-cainjector-7dbf76d5c8-fj7bn\" (UID: \"cab27a2d-a22b-44d7-83c1-f57c6d2bad11\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-fj7bn"
Jan 07 10:03:11 crc kubenswrapper[5131]: I0107 10:03:11.569950 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-fj7bn"
Jan 07 10:03:12 crc kubenswrapper[5131]: I0107 10:03:12.000573 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-fj7bn"]
Jan 07 10:03:12 crc kubenswrapper[5131]: I0107 10:03:12.132524 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-fj7bn" event={"ID":"cab27a2d-a22b-44d7-83c1-f57c6d2bad11","Type":"ContainerStarted","Data":"e8981a61f90e6c379077f13e7240cbd4c240e92d0b44771da82ad497f8b59f97"}
Jan 07 10:03:15 crc kubenswrapper[5131]: I0107 10:03:15.881224 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-7n9vm"
Jan 07 10:03:15 crc kubenswrapper[5131]: I0107 10:03:15.881867 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7n9vm"
Jan 07 10:03:15 crc kubenswrapper[5131]: I0107 10:03:15.924530 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7n9vm"
Jan 07 10:03:16 crc kubenswrapper[5131]: I0107 10:03:16.219088 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7n9vm"
Jan 07 10:03:16 crc kubenswrapper[5131]: I0107 10:03:16.584583 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"]
Jan 07 10:03:16 crc kubenswrapper[5131]: I0107 10:03:16.592281 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build"
Jan 07 10:03:16 crc kubenswrapper[5131]: I0107 10:03:16.594245 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-sys-config\""
Jan 07 10:03:16 crc kubenswrapper[5131]: I0107 10:03:16.594565 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-global-ca\""
Jan 07 10:03:16 crc kubenswrapper[5131]: I0107 10:03:16.594726 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-vc6bg\""
Jan 07 10:03:16 crc kubenswrapper[5131]: I0107 10:03:16.598375 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-ca\""
Jan 07 10:03:16 crc kubenswrapper[5131]: I0107 10:03:16.599973 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"]
Jan 07 10:03:16 crc kubenswrapper[5131]: I0107 10:03:16.728579 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c7812cfa-5291-4815-83a6-7bee81e13321-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"c7812cfa-5291-4815-83a6-7bee81e13321\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 07 10:03:16 crc kubenswrapper[5131]: I0107 10:03:16.728618 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c7812cfa-5291-4815-83a6-7bee81e13321-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"c7812cfa-5291-4815-83a6-7bee81e13321\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 07 10:03:16 crc kubenswrapper[5131]: I0107 10:03:16.728636 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/c7812cfa-5291-4815-83a6-7bee81e13321-builder-dockercfg-vc6bg-push\") pod \"service-telemetry-operator-1-build\" (UID: \"c7812cfa-5291-4815-83a6-7bee81e13321\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 07 10:03:16 crc kubenswrapper[5131]: I0107 10:03:16.728654 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k25dq\" (UniqueName: \"kubernetes.io/projected/c7812cfa-5291-4815-83a6-7bee81e13321-kube-api-access-k25dq\") pod \"service-telemetry-operator-1-build\" (UID: \"c7812cfa-5291-4815-83a6-7bee81e13321\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 07 10:03:16 crc kubenswrapper[5131]: I0107 10:03:16.728685 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/c7812cfa-5291-4815-83a6-7bee81e13321-builder-dockercfg-vc6bg-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"c7812cfa-5291-4815-83a6-7bee81e13321\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 07 10:03:16 crc kubenswrapper[5131]: I0107 10:03:16.728703 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c7812cfa-5291-4815-83a6-7bee81e13321-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"c7812cfa-5291-4815-83a6-7bee81e13321\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 07 10:03:16 crc kubenswrapper[5131]: I0107 10:03:16.728744 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c7812cfa-5291-4815-83a6-7bee81e13321-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"c7812cfa-5291-4815-83a6-7bee81e13321\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 07 10:03:16 crc kubenswrapper[5131]: I0107 10:03:16.728762 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c7812cfa-5291-4815-83a6-7bee81e13321-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"c7812cfa-5291-4815-83a6-7bee81e13321\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 07 10:03:16 crc kubenswrapper[5131]: I0107 10:03:16.728785 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c7812cfa-5291-4815-83a6-7bee81e13321-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"c7812cfa-5291-4815-83a6-7bee81e13321\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 07 10:03:16 crc kubenswrapper[5131]: I0107 10:03:16.728804 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c7812cfa-5291-4815-83a6-7bee81e13321-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"c7812cfa-5291-4815-83a6-7bee81e13321\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 07 10:03:16 crc kubenswrapper[5131]: I0107 10:03:16.728825 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c7812cfa-5291-4815-83a6-7bee81e13321-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"c7812cfa-5291-4815-83a6-7bee81e13321\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 07 10:03:16 crc kubenswrapper[5131]: I0107 10:03:16.728873 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c7812cfa-5291-4815-83a6-7bee81e13321-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"c7812cfa-5291-4815-83a6-7bee81e13321\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 07 10:03:16 crc kubenswrapper[5131]: I0107 10:03:16.830529 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c7812cfa-5291-4815-83a6-7bee81e13321-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"c7812cfa-5291-4815-83a6-7bee81e13321\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 07 10:03:16 crc kubenswrapper[5131]: I0107 10:03:16.830872 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c7812cfa-5291-4815-83a6-7bee81e13321-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"c7812cfa-5291-4815-83a6-7bee81e13321\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 07 10:03:16 crc kubenswrapper[5131]: I0107 10:03:16.831034 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c7812cfa-5291-4815-83a6-7bee81e13321-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"c7812cfa-5291-4815-83a6-7bee81e13321\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 07 10:03:16 crc kubenswrapper[5131]: I0107 10:03:16.831055 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c7812cfa-5291-4815-83a6-7bee81e13321-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"c7812cfa-5291-4815-83a6-7bee81e13321\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 07 10:03:16 crc kubenswrapper[5131]: I0107 10:03:16.831074 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/c7812cfa-5291-4815-83a6-7bee81e13321-builder-dockercfg-vc6bg-push\") pod \"service-telemetry-operator-1-build\" (UID: \"c7812cfa-5291-4815-83a6-7bee81e13321\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 07 10:03:16 crc kubenswrapper[5131]: I0107 10:03:16.831239 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c7812cfa-5291-4815-83a6-7bee81e13321-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"c7812cfa-5291-4815-83a6-7bee81e13321\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 07 10:03:16 crc kubenswrapper[5131]: I0107 10:03:16.831460 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k25dq\" (UniqueName: \"kubernetes.io/projected/c7812cfa-5291-4815-83a6-7bee81e13321-kube-api-access-k25dq\") pod \"service-telemetry-operator-1-build\" (UID: \"c7812cfa-5291-4815-83a6-7bee81e13321\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 07 10:03:16 crc kubenswrapper[5131]: I0107 10:03:16.831643 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/c7812cfa-5291-4815-83a6-7bee81e13321-builder-dockercfg-vc6bg-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"c7812cfa-5291-4815-83a6-7bee81e13321\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 07 10:03:16 crc kubenswrapper[5131]: I0107 10:03:16.831705 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c7812cfa-5291-4815-83a6-7bee81e13321-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"c7812cfa-5291-4815-83a6-7bee81e13321\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 07 10:03:16 crc kubenswrapper[5131]: I0107 10:03:16.831963 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c7812cfa-5291-4815-83a6-7bee81e13321-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"c7812cfa-5291-4815-83a6-7bee81e13321\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 07 10:03:16 crc kubenswrapper[5131]: I0107 10:03:16.832085 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c7812cfa-5291-4815-83a6-7bee81e13321-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"c7812cfa-5291-4815-83a6-7bee81e13321\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 07 10:03:16 crc kubenswrapper[5131]: I0107 10:03:16.832246 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c7812cfa-5291-4815-83a6-7bee81e13321-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"c7812cfa-5291-4815-83a6-7bee81e13321\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 07 10:03:16 crc kubenswrapper[5131]: I0107 10:03:16.832373 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c7812cfa-5291-4815-83a6-7bee81e13321-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"c7812cfa-5291-4815-83a6-7bee81e13321\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 07 10:03:16 crc kubenswrapper[5131]: I0107 10:03:16.832410 5131 reconciler_common.go:224]
"operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c7812cfa-5291-4815-83a6-7bee81e13321-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"c7812cfa-5291-4815-83a6-7bee81e13321\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 07 10:03:16 crc kubenswrapper[5131]: I0107 10:03:16.832459 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c7812cfa-5291-4815-83a6-7bee81e13321-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"c7812cfa-5291-4815-83a6-7bee81e13321\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 07 10:03:16 crc kubenswrapper[5131]: I0107 10:03:16.832665 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c7812cfa-5291-4815-83a6-7bee81e13321-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"c7812cfa-5291-4815-83a6-7bee81e13321\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 07 10:03:16 crc kubenswrapper[5131]: I0107 10:03:16.833212 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c7812cfa-5291-4815-83a6-7bee81e13321-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"c7812cfa-5291-4815-83a6-7bee81e13321\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 07 10:03:16 crc kubenswrapper[5131]: I0107 10:03:16.833998 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c7812cfa-5291-4815-83a6-7bee81e13321-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"c7812cfa-5291-4815-83a6-7bee81e13321\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 07 10:03:16 crc kubenswrapper[5131]: I0107 
10:03:16.834169 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c7812cfa-5291-4815-83a6-7bee81e13321-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"c7812cfa-5291-4815-83a6-7bee81e13321\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 07 10:03:16 crc kubenswrapper[5131]: I0107 10:03:16.834212 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c7812cfa-5291-4815-83a6-7bee81e13321-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"c7812cfa-5291-4815-83a6-7bee81e13321\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 07 10:03:16 crc kubenswrapper[5131]: I0107 10:03:16.834276 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c7812cfa-5291-4815-83a6-7bee81e13321-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"c7812cfa-5291-4815-83a6-7bee81e13321\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 07 10:03:16 crc kubenswrapper[5131]: I0107 10:03:16.837098 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/c7812cfa-5291-4815-83a6-7bee81e13321-builder-dockercfg-vc6bg-push\") pod \"service-telemetry-operator-1-build\" (UID: \"c7812cfa-5291-4815-83a6-7bee81e13321\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 07 10:03:16 crc kubenswrapper[5131]: I0107 10:03:16.848500 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/c7812cfa-5291-4815-83a6-7bee81e13321-builder-dockercfg-vc6bg-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"c7812cfa-5291-4815-83a6-7bee81e13321\") " 
pod="service-telemetry/service-telemetry-operator-1-build" Jan 07 10:03:16 crc kubenswrapper[5131]: I0107 10:03:16.851186 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k25dq\" (UniqueName: \"kubernetes.io/projected/c7812cfa-5291-4815-83a6-7bee81e13321-kube-api-access-k25dq\") pod \"service-telemetry-operator-1-build\" (UID: \"c7812cfa-5291-4815-83a6-7bee81e13321\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 07 10:03:16 crc kubenswrapper[5131]: I0107 10:03:16.912768 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 07 10:03:17 crc kubenswrapper[5131]: I0107 10:03:17.104160 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7n9vm"] Jan 07 10:03:18 crc kubenswrapper[5131]: I0107 10:03:18.188743 5131 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7n9vm" podUID="1d6485bf-d0bc-4180-ad81-1f14f6a14921" containerName="registry-server" containerID="cri-o://b91e20b922e0eb18278506eddc0f0c37d92b38200d28642651ccab3be4a5c66d" gracePeriod=2 Jan 07 10:03:18 crc kubenswrapper[5131]: I0107 10:03:18.257388 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 07 10:03:18 crc kubenswrapper[5131]: W0107 10:03:18.306763 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7812cfa_5291_4815_83a6_7bee81e13321.slice/crio-218dfcb258e6c26afacd516fafc925221b35814bcb8fd46a88e378d62d182133 WatchSource:0}: Error finding container 218dfcb258e6c26afacd516fafc925221b35814bcb8fd46a88e378d62d182133: Status 404 returned error can't find the container with id 218dfcb258e6c26afacd516fafc925221b35814bcb8fd46a88e378d62d182133 Jan 07 10:03:18 crc kubenswrapper[5131]: I0107 10:03:18.517490 
5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7n9vm" Jan 07 10:03:18 crc kubenswrapper[5131]: I0107 10:03:18.658475 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9hpv\" (UniqueName: \"kubernetes.io/projected/1d6485bf-d0bc-4180-ad81-1f14f6a14921-kube-api-access-d9hpv\") pod \"1d6485bf-d0bc-4180-ad81-1f14f6a14921\" (UID: \"1d6485bf-d0bc-4180-ad81-1f14f6a14921\") " Jan 07 10:03:18 crc kubenswrapper[5131]: I0107 10:03:18.658654 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d6485bf-d0bc-4180-ad81-1f14f6a14921-utilities\") pod \"1d6485bf-d0bc-4180-ad81-1f14f6a14921\" (UID: \"1d6485bf-d0bc-4180-ad81-1f14f6a14921\") " Jan 07 10:03:18 crc kubenswrapper[5131]: I0107 10:03:18.658717 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d6485bf-d0bc-4180-ad81-1f14f6a14921-catalog-content\") pod \"1d6485bf-d0bc-4180-ad81-1f14f6a14921\" (UID: \"1d6485bf-d0bc-4180-ad81-1f14f6a14921\") " Jan 07 10:03:18 crc kubenswrapper[5131]: I0107 10:03:18.659971 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d6485bf-d0bc-4180-ad81-1f14f6a14921-utilities" (OuterVolumeSpecName: "utilities") pod "1d6485bf-d0bc-4180-ad81-1f14f6a14921" (UID: "1d6485bf-d0bc-4180-ad81-1f14f6a14921"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:03:18 crc kubenswrapper[5131]: I0107 10:03:18.664698 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d6485bf-d0bc-4180-ad81-1f14f6a14921-kube-api-access-d9hpv" (OuterVolumeSpecName: "kube-api-access-d9hpv") pod "1d6485bf-d0bc-4180-ad81-1f14f6a14921" (UID: "1d6485bf-d0bc-4180-ad81-1f14f6a14921"). 
InnerVolumeSpecName "kube-api-access-d9hpv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 10:03:18 crc kubenswrapper[5131]: I0107 10:03:18.714015 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d6485bf-d0bc-4180-ad81-1f14f6a14921-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d6485bf-d0bc-4180-ad81-1f14f6a14921" (UID: "1d6485bf-d0bc-4180-ad81-1f14f6a14921"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:03:18 crc kubenswrapper[5131]: I0107 10:03:18.760368 5131 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d6485bf-d0bc-4180-ad81-1f14f6a14921-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 07 10:03:18 crc kubenswrapper[5131]: I0107 10:03:18.760405 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d9hpv\" (UniqueName: \"kubernetes.io/projected/1d6485bf-d0bc-4180-ad81-1f14f6a14921-kube-api-access-d9hpv\") on node \"crc\" DevicePath \"\"" Jan 07 10:03:18 crc kubenswrapper[5131]: I0107 10:03:18.760437 5131 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d6485bf-d0bc-4180-ad81-1f14f6a14921-utilities\") on node \"crc\" DevicePath \"\"" Jan 07 10:03:19 crc kubenswrapper[5131]: I0107 10:03:19.206665 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-fj7bn" event={"ID":"cab27a2d-a22b-44d7-83c1-f57c6d2bad11","Type":"ContainerStarted","Data":"8c9045a9851f097c3104661db3965cb3d07dd09c81d486faa3be177bb20eff24"} Jan 07 10:03:19 crc kubenswrapper[5131]: I0107 10:03:19.209264 5131 generic.go:358] "Generic (PLEG): container finished" podID="1d6485bf-d0bc-4180-ad81-1f14f6a14921" containerID="b91e20b922e0eb18278506eddc0f0c37d92b38200d28642651ccab3be4a5c66d" exitCode=0 Jan 07 10:03:19 crc kubenswrapper[5131]: I0107 
10:03:19.209350 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7n9vm" event={"ID":"1d6485bf-d0bc-4180-ad81-1f14f6a14921","Type":"ContainerDied","Data":"b91e20b922e0eb18278506eddc0f0c37d92b38200d28642651ccab3be4a5c66d"} Jan 07 10:03:19 crc kubenswrapper[5131]: I0107 10:03:19.209398 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7n9vm" event={"ID":"1d6485bf-d0bc-4180-ad81-1f14f6a14921","Type":"ContainerDied","Data":"73e17d73e60138845695baa6ad6d4c6b9ac9d08664565d1e7a72a05d06128562"} Jan 07 10:03:19 crc kubenswrapper[5131]: I0107 10:03:19.209414 5131 scope.go:117] "RemoveContainer" containerID="b91e20b922e0eb18278506eddc0f0c37d92b38200d28642651ccab3be4a5c66d" Jan 07 10:03:19 crc kubenswrapper[5131]: I0107 10:03:19.209448 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7n9vm" Jan 07 10:03:19 crc kubenswrapper[5131]: I0107 10:03:19.213223 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-pnzxn" event={"ID":"874e81d2-06c0-4aad-aa39-701198f0be4d","Type":"ContainerStarted","Data":"d18ac523f197fe8608983f36d786762bc343c086db1bfefcebd96b63ebbdff34"} Jan 07 10:03:19 crc kubenswrapper[5131]: I0107 10:03:19.213312 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-pnzxn" Jan 07 10:03:19 crc kubenswrapper[5131]: I0107 10:03:19.216405 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"c7812cfa-5291-4815-83a6-7bee81e13321","Type":"ContainerStarted","Data":"218dfcb258e6c26afacd516fafc925221b35814bcb8fd46a88e378d62d182133"} Jan 07 10:03:19 crc kubenswrapper[5131]: I0107 10:03:19.233362 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-fj7bn" podStartSLOduration=2.173021308 podStartE2EDuration="8.233337825s" podCreationTimestamp="2026-01-07 10:03:11 +0000 UTC" firstStartedPulling="2026-01-07 10:03:12.018785073 +0000 UTC m=+820.185086647" lastFinishedPulling="2026-01-07 10:03:18.0791016 +0000 UTC m=+826.245403164" observedRunningTime="2026-01-07 10:03:19.224291985 +0000 UTC m=+827.390593549" watchObservedRunningTime="2026-01-07 10:03:19.233337825 +0000 UTC m=+827.399639399" Jan 07 10:03:19 crc kubenswrapper[5131]: I0107 10:03:19.242981 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-7894b5b9b4-pnzxn" podStartSLOduration=2.119799428 podStartE2EDuration="9.242965129s" podCreationTimestamp="2026-01-07 10:03:10 +0000 UTC" firstStartedPulling="2026-01-07 10:03:10.986615434 +0000 UTC m=+819.152916998" lastFinishedPulling="2026-01-07 10:03:18.109781105 +0000 UTC m=+826.276082699" observedRunningTime="2026-01-07 10:03:19.242371274 +0000 UTC m=+827.408672858" watchObservedRunningTime="2026-01-07 10:03:19.242965129 +0000 UTC m=+827.409266703" Jan 07 10:03:19 crc kubenswrapper[5131]: I0107 10:03:19.248673 5131 scope.go:117] "RemoveContainer" containerID="121d4961c3c59643e214f9ae4241f08bbcde043f70157d9003ca285d9d3b7b9f" Jan 07 10:03:19 crc kubenswrapper[5131]: I0107 10:03:19.279251 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7n9vm"] Jan 07 10:03:19 crc kubenswrapper[5131]: I0107 10:03:19.281382 5131 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7n9vm"] Jan 07 10:03:19 crc kubenswrapper[5131]: I0107 10:03:19.296091 5131 scope.go:117] "RemoveContainer" containerID="43a7d1fa1eb8842ae7ac7ed7de4b343b0e05e77b2f8f68d56aaad4c75681835d" Jan 07 10:03:19 crc kubenswrapper[5131]: I0107 10:03:19.312974 5131 scope.go:117] "RemoveContainer" 
containerID="b91e20b922e0eb18278506eddc0f0c37d92b38200d28642651ccab3be4a5c66d" Jan 07 10:03:19 crc kubenswrapper[5131]: E0107 10:03:19.313351 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b91e20b922e0eb18278506eddc0f0c37d92b38200d28642651ccab3be4a5c66d\": container with ID starting with b91e20b922e0eb18278506eddc0f0c37d92b38200d28642651ccab3be4a5c66d not found: ID does not exist" containerID="b91e20b922e0eb18278506eddc0f0c37d92b38200d28642651ccab3be4a5c66d" Jan 07 10:03:19 crc kubenswrapper[5131]: I0107 10:03:19.313403 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b91e20b922e0eb18278506eddc0f0c37d92b38200d28642651ccab3be4a5c66d"} err="failed to get container status \"b91e20b922e0eb18278506eddc0f0c37d92b38200d28642651ccab3be4a5c66d\": rpc error: code = NotFound desc = could not find container \"b91e20b922e0eb18278506eddc0f0c37d92b38200d28642651ccab3be4a5c66d\": container with ID starting with b91e20b922e0eb18278506eddc0f0c37d92b38200d28642651ccab3be4a5c66d not found: ID does not exist" Jan 07 10:03:19 crc kubenswrapper[5131]: I0107 10:03:19.313441 5131 scope.go:117] "RemoveContainer" containerID="121d4961c3c59643e214f9ae4241f08bbcde043f70157d9003ca285d9d3b7b9f" Jan 07 10:03:19 crc kubenswrapper[5131]: E0107 10:03:19.313705 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"121d4961c3c59643e214f9ae4241f08bbcde043f70157d9003ca285d9d3b7b9f\": container with ID starting with 121d4961c3c59643e214f9ae4241f08bbcde043f70157d9003ca285d9d3b7b9f not found: ID does not exist" containerID="121d4961c3c59643e214f9ae4241f08bbcde043f70157d9003ca285d9d3b7b9f" Jan 07 10:03:19 crc kubenswrapper[5131]: I0107 10:03:19.313725 5131 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"121d4961c3c59643e214f9ae4241f08bbcde043f70157d9003ca285d9d3b7b9f"} err="failed to get container status \"121d4961c3c59643e214f9ae4241f08bbcde043f70157d9003ca285d9d3b7b9f\": rpc error: code = NotFound desc = could not find container \"121d4961c3c59643e214f9ae4241f08bbcde043f70157d9003ca285d9d3b7b9f\": container with ID starting with 121d4961c3c59643e214f9ae4241f08bbcde043f70157d9003ca285d9d3b7b9f not found: ID does not exist" Jan 07 10:03:19 crc kubenswrapper[5131]: I0107 10:03:19.313754 5131 scope.go:117] "RemoveContainer" containerID="43a7d1fa1eb8842ae7ac7ed7de4b343b0e05e77b2f8f68d56aaad4c75681835d" Jan 07 10:03:19 crc kubenswrapper[5131]: E0107 10:03:19.314120 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43a7d1fa1eb8842ae7ac7ed7de4b343b0e05e77b2f8f68d56aaad4c75681835d\": container with ID starting with 43a7d1fa1eb8842ae7ac7ed7de4b343b0e05e77b2f8f68d56aaad4c75681835d not found: ID does not exist" containerID="43a7d1fa1eb8842ae7ac7ed7de4b343b0e05e77b2f8f68d56aaad4c75681835d" Jan 07 10:03:19 crc kubenswrapper[5131]: I0107 10:03:19.314141 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43a7d1fa1eb8842ae7ac7ed7de4b343b0e05e77b2f8f68d56aaad4c75681835d"} err="failed to get container status \"43a7d1fa1eb8842ae7ac7ed7de4b343b0e05e77b2f8f68d56aaad4c75681835d\": rpc error: code = NotFound desc = could not find container \"43a7d1fa1eb8842ae7ac7ed7de4b343b0e05e77b2f8f68d56aaad4c75681835d\": container with ID starting with 43a7d1fa1eb8842ae7ac7ed7de4b343b0e05e77b2f8f68d56aaad4c75681835d not found: ID does not exist" Jan 07 10:03:20 crc kubenswrapper[5131]: I0107 10:03:20.187893 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d6485bf-d0bc-4180-ad81-1f14f6a14921" path="/var/lib/kubelet/pods/1d6485bf-d0bc-4180-ad81-1f14f6a14921/volumes" Jan 07 10:03:20 crc kubenswrapper[5131]: I0107 
10:03:20.663621 5131 patch_prober.go:28] interesting pod/machine-config-daemon-dvdrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 07 10:03:20 crc kubenswrapper[5131]: I0107 10:03:20.663702 5131 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 07 10:03:21 crc kubenswrapper[5131]: I0107 10:03:21.200174 5131 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="eebbd95e-bc5a-4c38-817e-06e8a132f328" containerName="elasticsearch" probeResult="failure" output=< Jan 07 10:03:21 crc kubenswrapper[5131]: {"timestamp": "2026-01-07T10:03:21+00:00", "message": "readiness probe failed", "curl_rc": "7"} Jan 07 10:03:21 crc kubenswrapper[5131]: > Jan 07 10:03:25 crc kubenswrapper[5131]: I0107 10:03:25.230371 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-pnzxn" Jan 07 10:03:26 crc kubenswrapper[5131]: I0107 10:03:26.212702 5131 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="eebbd95e-bc5a-4c38-817e-06e8a132f328" containerName="elasticsearch" probeResult="failure" output=< Jan 07 10:03:26 crc kubenswrapper[5131]: {"timestamp": "2026-01-07T10:03:26+00:00", "message": "readiness probe failed", "curl_rc": "7"} Jan 07 10:03:26 crc kubenswrapper[5131]: > Jan 07 10:03:27 crc kubenswrapper[5131]: I0107 10:03:27.245561 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 07 
10:03:28 crc kubenswrapper[5131]: I0107 10:03:28.381166 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858d87f86b-ztkw9"] Jan 07 10:03:28 crc kubenswrapper[5131]: I0107 10:03:28.382050 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1d6485bf-d0bc-4180-ad81-1f14f6a14921" containerName="extract-content" Jan 07 10:03:28 crc kubenswrapper[5131]: I0107 10:03:28.382062 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d6485bf-d0bc-4180-ad81-1f14f6a14921" containerName="extract-content" Jan 07 10:03:28 crc kubenswrapper[5131]: I0107 10:03:28.382075 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1d6485bf-d0bc-4180-ad81-1f14f6a14921" containerName="registry-server" Jan 07 10:03:28 crc kubenswrapper[5131]: I0107 10:03:28.382080 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d6485bf-d0bc-4180-ad81-1f14f6a14921" containerName="registry-server" Jan 07 10:03:28 crc kubenswrapper[5131]: I0107 10:03:28.382089 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1d6485bf-d0bc-4180-ad81-1f14f6a14921" containerName="extract-utilities" Jan 07 10:03:28 crc kubenswrapper[5131]: I0107 10:03:28.382094 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d6485bf-d0bc-4180-ad81-1f14f6a14921" containerName="extract-utilities" Jan 07 10:03:28 crc kubenswrapper[5131]: I0107 10:03:28.382190 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="1d6485bf-d0bc-4180-ad81-1f14f6a14921" containerName="registry-server" Jan 07 10:03:28 crc kubenswrapper[5131]: I0107 10:03:28.388894 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-ztkw9" Jan 07 10:03:28 crc kubenswrapper[5131]: I0107 10:03:28.390971 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-ztkw9"] Jan 07 10:03:28 crc kubenswrapper[5131]: I0107 10:03:28.402744 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-dockercfg-c5lfx\"" Jan 07 10:03:28 crc kubenswrapper[5131]: I0107 10:03:28.496343 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/338352b1-4821-4bab-929b-c47e7583474b-bound-sa-token\") pod \"cert-manager-858d87f86b-ztkw9\" (UID: \"338352b1-4821-4bab-929b-c47e7583474b\") " pod="cert-manager/cert-manager-858d87f86b-ztkw9" Jan 07 10:03:28 crc kubenswrapper[5131]: I0107 10:03:28.496479 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdv86\" (UniqueName: \"kubernetes.io/projected/338352b1-4821-4bab-929b-c47e7583474b-kube-api-access-kdv86\") pod \"cert-manager-858d87f86b-ztkw9\" (UID: \"338352b1-4821-4bab-929b-c47e7583474b\") " pod="cert-manager/cert-manager-858d87f86b-ztkw9" Jan 07 10:03:28 crc kubenswrapper[5131]: I0107 10:03:28.597775 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kdv86\" (UniqueName: \"kubernetes.io/projected/338352b1-4821-4bab-929b-c47e7583474b-kube-api-access-kdv86\") pod \"cert-manager-858d87f86b-ztkw9\" (UID: \"338352b1-4821-4bab-929b-c47e7583474b\") " pod="cert-manager/cert-manager-858d87f86b-ztkw9" Jan 07 10:03:28 crc kubenswrapper[5131]: I0107 10:03:28.597899 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/338352b1-4821-4bab-929b-c47e7583474b-bound-sa-token\") pod \"cert-manager-858d87f86b-ztkw9\" (UID: 
\"338352b1-4821-4bab-929b-c47e7583474b\") " pod="cert-manager/cert-manager-858d87f86b-ztkw9" Jan 07 10:03:28 crc kubenswrapper[5131]: I0107 10:03:28.617248 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/338352b1-4821-4bab-929b-c47e7583474b-bound-sa-token\") pod \"cert-manager-858d87f86b-ztkw9\" (UID: \"338352b1-4821-4bab-929b-c47e7583474b\") " pod="cert-manager/cert-manager-858d87f86b-ztkw9" Jan 07 10:03:28 crc kubenswrapper[5131]: I0107 10:03:28.638020 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdv86\" (UniqueName: \"kubernetes.io/projected/338352b1-4821-4bab-929b-c47e7583474b-kube-api-access-kdv86\") pod \"cert-manager-858d87f86b-ztkw9\" (UID: \"338352b1-4821-4bab-929b-c47e7583474b\") " pod="cert-manager/cert-manager-858d87f86b-ztkw9" Jan 07 10:03:28 crc kubenswrapper[5131]: I0107 10:03:28.712486 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-ztkw9" Jan 07 10:03:29 crc kubenswrapper[5131]: I0107 10:03:29.249704 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Jan 07 10:03:29 crc kubenswrapper[5131]: I0107 10:03:29.263404 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build"
Jan 07 10:03:29 crc kubenswrapper[5131]: I0107 10:03:29.268277 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"]
Jan 07 10:03:29 crc kubenswrapper[5131]: I0107 10:03:29.316260 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-sys-config\""
Jan 07 10:03:29 crc kubenswrapper[5131]: I0107 10:03:29.316269 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-ca\""
Jan 07 10:03:29 crc kubenswrapper[5131]: I0107 10:03:29.316502 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-global-ca\""
Jan 07 10:03:29 crc kubenswrapper[5131]: I0107 10:03:29.416231 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"6bdce6a8-6c59-4eb5-9724-cacf299dcd90\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 07 10:03:29 crc kubenswrapper[5131]: I0107 10:03:29.416638 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"6bdce6a8-6c59-4eb5-9724-cacf299dcd90\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 07 10:03:29 crc kubenswrapper[5131]: I0107 10:03:29.416707 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"6bdce6a8-6c59-4eb5-9724-cacf299dcd90\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 07 10:03:29 crc kubenswrapper[5131]: I0107 10:03:29.416746 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnpmn\" (UniqueName: \"kubernetes.io/projected/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-kube-api-access-lnpmn\") pod \"service-telemetry-operator-2-build\" (UID: \"6bdce6a8-6c59-4eb5-9724-cacf299dcd90\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 07 10:03:29 crc kubenswrapper[5131]: I0107 10:03:29.416782 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-builder-dockercfg-vc6bg-push\") pod \"service-telemetry-operator-2-build\" (UID: \"6bdce6a8-6c59-4eb5-9724-cacf299dcd90\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 07 10:03:29 crc kubenswrapper[5131]: I0107 10:03:29.416822 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"6bdce6a8-6c59-4eb5-9724-cacf299dcd90\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 07 10:03:29 crc kubenswrapper[5131]: I0107 10:03:29.416916 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-builder-dockercfg-vc6bg-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"6bdce6a8-6c59-4eb5-9724-cacf299dcd90\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 07 10:03:29 crc kubenswrapper[5131]: I0107 10:03:29.416951 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"6bdce6a8-6c59-4eb5-9724-cacf299dcd90\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 07 10:03:29 crc kubenswrapper[5131]: I0107 10:03:29.417063 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"6bdce6a8-6c59-4eb5-9724-cacf299dcd90\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 07 10:03:29 crc kubenswrapper[5131]: I0107 10:03:29.417100 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"6bdce6a8-6c59-4eb5-9724-cacf299dcd90\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 07 10:03:29 crc kubenswrapper[5131]: I0107 10:03:29.417164 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"6bdce6a8-6c59-4eb5-9724-cacf299dcd90\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 07 10:03:29 crc kubenswrapper[5131]: I0107 10:03:29.417209 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"6bdce6a8-6c59-4eb5-9724-cacf299dcd90\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 07 10:03:29 crc kubenswrapper[5131]: I0107 10:03:29.518787 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"6bdce6a8-6c59-4eb5-9724-cacf299dcd90\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 07 10:03:29 crc kubenswrapper[5131]: I0107 10:03:29.518825 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lnpmn\" (UniqueName: \"kubernetes.io/projected/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-kube-api-access-lnpmn\") pod \"service-telemetry-operator-2-build\" (UID: \"6bdce6a8-6c59-4eb5-9724-cacf299dcd90\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 07 10:03:29 crc kubenswrapper[5131]: I0107 10:03:29.518858 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-builder-dockercfg-vc6bg-push\") pod \"service-telemetry-operator-2-build\" (UID: \"6bdce6a8-6c59-4eb5-9724-cacf299dcd90\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 07 10:03:29 crc kubenswrapper[5131]: I0107 10:03:29.518882 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"6bdce6a8-6c59-4eb5-9724-cacf299dcd90\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 07 10:03:29 crc kubenswrapper[5131]: I0107 10:03:29.518903 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-builder-dockercfg-vc6bg-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"6bdce6a8-6c59-4eb5-9724-cacf299dcd90\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 07 10:03:29 crc kubenswrapper[5131]: I0107 10:03:29.518928 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"6bdce6a8-6c59-4eb5-9724-cacf299dcd90\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 07 10:03:29 crc kubenswrapper[5131]: I0107 10:03:29.519003 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"6bdce6a8-6c59-4eb5-9724-cacf299dcd90\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 07 10:03:29 crc kubenswrapper[5131]: I0107 10:03:29.519030 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"6bdce6a8-6c59-4eb5-9724-cacf299dcd90\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 07 10:03:29 crc kubenswrapper[5131]: I0107 10:03:29.519101 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"6bdce6a8-6c59-4eb5-9724-cacf299dcd90\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 07 10:03:29 crc kubenswrapper[5131]: I0107 10:03:29.519133 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"6bdce6a8-6c59-4eb5-9724-cacf299dcd90\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 07 10:03:29 crc kubenswrapper[5131]: I0107 10:03:29.519157 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"6bdce6a8-6c59-4eb5-9724-cacf299dcd90\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 07 10:03:29 crc kubenswrapper[5131]: I0107 10:03:29.519183 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"6bdce6a8-6c59-4eb5-9724-cacf299dcd90\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 07 10:03:29 crc kubenswrapper[5131]: I0107 10:03:29.519340 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"6bdce6a8-6c59-4eb5-9724-cacf299dcd90\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 07 10:03:29 crc kubenswrapper[5131]: I0107 10:03:29.519622 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"6bdce6a8-6c59-4eb5-9724-cacf299dcd90\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 07 10:03:29 crc kubenswrapper[5131]: I0107 10:03:29.520365 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"6bdce6a8-6c59-4eb5-9724-cacf299dcd90\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 07 10:03:29 crc kubenswrapper[5131]: I0107 10:03:29.520592 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"6bdce6a8-6c59-4eb5-9724-cacf299dcd90\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 07 10:03:29 crc kubenswrapper[5131]: I0107 10:03:29.520761 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"6bdce6a8-6c59-4eb5-9724-cacf299dcd90\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 07 10:03:29 crc kubenswrapper[5131]: I0107 10:03:29.521209 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"6bdce6a8-6c59-4eb5-9724-cacf299dcd90\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 07 10:03:29 crc kubenswrapper[5131]: I0107 10:03:29.521388 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"6bdce6a8-6c59-4eb5-9724-cacf299dcd90\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 07 10:03:29 crc kubenswrapper[5131]: I0107 10:03:29.521709 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"6bdce6a8-6c59-4eb5-9724-cacf299dcd90\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 07 10:03:29 crc kubenswrapper[5131]: I0107 10:03:29.522087 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"6bdce6a8-6c59-4eb5-9724-cacf299dcd90\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 07 10:03:29 crc kubenswrapper[5131]: I0107 10:03:29.524034 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-builder-dockercfg-vc6bg-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"6bdce6a8-6c59-4eb5-9724-cacf299dcd90\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 07 10:03:29 crc kubenswrapper[5131]: I0107 10:03:29.525681 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-builder-dockercfg-vc6bg-push\") pod \"service-telemetry-operator-2-build\" (UID: \"6bdce6a8-6c59-4eb5-9724-cacf299dcd90\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 07 10:03:29 crc kubenswrapper[5131]: I0107 10:03:29.535637 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnpmn\" (UniqueName: \"kubernetes.io/projected/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-kube-api-access-lnpmn\") pod \"service-telemetry-operator-2-build\" (UID: \"6bdce6a8-6c59-4eb5-9724-cacf299dcd90\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 07 10:03:29 crc kubenswrapper[5131]: I0107 10:03:29.628970 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build"
Jan 07 10:03:31 crc kubenswrapper[5131]: I0107 10:03:31.236392 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"]
Jan 07 10:03:31 crc kubenswrapper[5131]: I0107 10:03:31.291518 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-ztkw9"]
Jan 07 10:03:31 crc kubenswrapper[5131]: W0107 10:03:31.299922 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod338352b1_4821_4bab_929b_c47e7583474b.slice/crio-134529d07a164aebdf43ded6647502d28ffd7eee73d4245f7f9670c852f8efd4 WatchSource:0}: Error finding container 134529d07a164aebdf43ded6647502d28ffd7eee73d4245f7f9670c852f8efd4: Status 404 returned error can't find the container with id 134529d07a164aebdf43ded6647502d28ffd7eee73d4245f7f9670c852f8efd4
Jan 07 10:03:31 crc kubenswrapper[5131]: I0107 10:03:31.337397 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-ztkw9" event={"ID":"338352b1-4821-4bab-929b-c47e7583474b","Type":"ContainerStarted","Data":"134529d07a164aebdf43ded6647502d28ffd7eee73d4245f7f9670c852f8efd4"}
Jan 07 10:03:31 crc kubenswrapper[5131]: I0107 10:03:31.338690 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"6bdce6a8-6c59-4eb5-9724-cacf299dcd90","Type":"ContainerStarted","Data":"cf80d7661b6fb478f5337015bdcee513599f6a7fa69e0dd3e0ffc9ce96dc6ea9"}
Jan 07 10:03:31 crc kubenswrapper[5131]: I0107 10:03:31.689231 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0"
Jan 07 10:03:32 crc kubenswrapper[5131]: I0107 10:03:32.345754 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-ztkw9" event={"ID":"338352b1-4821-4bab-929b-c47e7583474b","Type":"ContainerStarted","Data":"48b08f379cb6677a36ca64159b920c61a9d783b6f3683334404937c66c32368b"}
Jan 07 10:03:32 crc kubenswrapper[5131]: I0107 10:03:32.381995 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858d87f86b-ztkw9" podStartSLOduration=4.381974825 podStartE2EDuration="4.381974825s" podCreationTimestamp="2026-01-07 10:03:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 10:03:32.376522183 +0000 UTC m=+840.542823757" watchObservedRunningTime="2026-01-07 10:03:32.381974825 +0000 UTC m=+840.548276419"
Jan 07 10:03:34 crc kubenswrapper[5131]: I0107 10:03:34.358867 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"6bdce6a8-6c59-4eb5-9724-cacf299dcd90","Type":"ContainerStarted","Data":"c89251fc5d5d15077b61991de7fd8333074d4f508c6fec5c5b852d091cab2f33"}
Jan 07 10:03:34 crc kubenswrapper[5131]: I0107 10:03:34.360673 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"c7812cfa-5291-4815-83a6-7bee81e13321","Type":"ContainerStarted","Data":"b8da8ed6466f2c7c758d494bbac2f11b3b58eab5015ef69ad2c3f82ccf473482"}
Jan 07 10:03:34 crc kubenswrapper[5131]: I0107 10:03:34.360686 5131 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-1-build" podUID="c7812cfa-5291-4815-83a6-7bee81e13321" containerName="manage-dockerfile" containerID="cri-o://b8da8ed6466f2c7c758d494bbac2f11b3b58eab5015ef69ad2c3f82ccf473482" gracePeriod=30
Jan 07 10:03:34 crc kubenswrapper[5131]: I0107 10:03:34.765474 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_c7812cfa-5291-4815-83a6-7bee81e13321/manage-dockerfile/0.log"
Jan 07 10:03:34 crc kubenswrapper[5131]: I0107 10:03:34.765593 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build"
Jan 07 10:03:34 crc kubenswrapper[5131]: I0107 10:03:34.801918 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/c7812cfa-5291-4815-83a6-7bee81e13321-builder-dockercfg-vc6bg-pull\") pod \"c7812cfa-5291-4815-83a6-7bee81e13321\" (UID: \"c7812cfa-5291-4815-83a6-7bee81e13321\") "
Jan 07 10:03:34 crc kubenswrapper[5131]: I0107 10:03:34.801967 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c7812cfa-5291-4815-83a6-7bee81e13321-node-pullsecrets\") pod \"c7812cfa-5291-4815-83a6-7bee81e13321\" (UID: \"c7812cfa-5291-4815-83a6-7bee81e13321\") "
Jan 07 10:03:34 crc kubenswrapper[5131]: I0107 10:03:34.802007 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c7812cfa-5291-4815-83a6-7bee81e13321-container-storage-root\") pod \"c7812cfa-5291-4815-83a6-7bee81e13321\" (UID: \"c7812cfa-5291-4815-83a6-7bee81e13321\") "
Jan 07 10:03:34 crc kubenswrapper[5131]: I0107 10:03:34.802064 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k25dq\" (UniqueName: \"kubernetes.io/projected/c7812cfa-5291-4815-83a6-7bee81e13321-kube-api-access-k25dq\") pod \"c7812cfa-5291-4815-83a6-7bee81e13321\" (UID: \"c7812cfa-5291-4815-83a6-7bee81e13321\") "
Jan 07 10:03:34 crc kubenswrapper[5131]: I0107 10:03:34.802095 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c7812cfa-5291-4815-83a6-7bee81e13321-build-blob-cache\") pod \"c7812cfa-5291-4815-83a6-7bee81e13321\" (UID: \"c7812cfa-5291-4815-83a6-7bee81e13321\") "
Jan 07 10:03:34 crc kubenswrapper[5131]: I0107 10:03:34.802092 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7812cfa-5291-4815-83a6-7bee81e13321-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "c7812cfa-5291-4815-83a6-7bee81e13321" (UID: "c7812cfa-5291-4815-83a6-7bee81e13321"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 07 10:03:34 crc kubenswrapper[5131]: I0107 10:03:34.802120 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/c7812cfa-5291-4815-83a6-7bee81e13321-builder-dockercfg-vc6bg-push\") pod \"c7812cfa-5291-4815-83a6-7bee81e13321\" (UID: \"c7812cfa-5291-4815-83a6-7bee81e13321\") "
Jan 07 10:03:34 crc kubenswrapper[5131]: I0107 10:03:34.802491 5131 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c7812cfa-5291-4815-83a6-7bee81e13321-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Jan 07 10:03:34 crc kubenswrapper[5131]: I0107 10:03:34.802491 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7812cfa-5291-4815-83a6-7bee81e13321-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "c7812cfa-5291-4815-83a6-7bee81e13321" (UID: "c7812cfa-5291-4815-83a6-7bee81e13321"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 07 10:03:34 crc kubenswrapper[5131]: I0107 10:03:34.802849 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7812cfa-5291-4815-83a6-7bee81e13321-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "c7812cfa-5291-4815-83a6-7bee81e13321" (UID: "c7812cfa-5291-4815-83a6-7bee81e13321"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 07 10:03:34 crc kubenswrapper[5131]: I0107 10:03:34.808912 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7812cfa-5291-4815-83a6-7bee81e13321-kube-api-access-k25dq" (OuterVolumeSpecName: "kube-api-access-k25dq") pod "c7812cfa-5291-4815-83a6-7bee81e13321" (UID: "c7812cfa-5291-4815-83a6-7bee81e13321"). InnerVolumeSpecName "kube-api-access-k25dq". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 07 10:03:34 crc kubenswrapper[5131]: I0107 10:03:34.808992 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7812cfa-5291-4815-83a6-7bee81e13321-builder-dockercfg-vc6bg-pull" (OuterVolumeSpecName: "builder-dockercfg-vc6bg-pull") pod "c7812cfa-5291-4815-83a6-7bee81e13321" (UID: "c7812cfa-5291-4815-83a6-7bee81e13321"). InnerVolumeSpecName "builder-dockercfg-vc6bg-pull". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 07 10:03:34 crc kubenswrapper[5131]: I0107 10:03:34.810107 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7812cfa-5291-4815-83a6-7bee81e13321-builder-dockercfg-vc6bg-push" (OuterVolumeSpecName: "builder-dockercfg-vc6bg-push") pod "c7812cfa-5291-4815-83a6-7bee81e13321" (UID: "c7812cfa-5291-4815-83a6-7bee81e13321"). InnerVolumeSpecName "builder-dockercfg-vc6bg-push". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 07 10:03:34 crc kubenswrapper[5131]: I0107 10:03:34.903228 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c7812cfa-5291-4815-83a6-7bee81e13321-buildworkdir\") pod \"c7812cfa-5291-4815-83a6-7bee81e13321\" (UID: \"c7812cfa-5291-4815-83a6-7bee81e13321\") "
Jan 07 10:03:34 crc kubenswrapper[5131]: I0107 10:03:34.903291 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c7812cfa-5291-4815-83a6-7bee81e13321-build-ca-bundles\") pod \"c7812cfa-5291-4815-83a6-7bee81e13321\" (UID: \"c7812cfa-5291-4815-83a6-7bee81e13321\") "
Jan 07 10:03:34 crc kubenswrapper[5131]: I0107 10:03:34.903322 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c7812cfa-5291-4815-83a6-7bee81e13321-buildcachedir\") pod \"c7812cfa-5291-4815-83a6-7bee81e13321\" (UID: \"c7812cfa-5291-4815-83a6-7bee81e13321\") "
Jan 07 10:03:34 crc kubenswrapper[5131]: I0107 10:03:34.903380 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c7812cfa-5291-4815-83a6-7bee81e13321-container-storage-run\") pod \"c7812cfa-5291-4815-83a6-7bee81e13321\" (UID: \"c7812cfa-5291-4815-83a6-7bee81e13321\") "
Jan 07 10:03:34 crc kubenswrapper[5131]: I0107 10:03:34.903449 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c7812cfa-5291-4815-83a6-7bee81e13321-build-proxy-ca-bundles\") pod \"c7812cfa-5291-4815-83a6-7bee81e13321\" (UID: \"c7812cfa-5291-4815-83a6-7bee81e13321\") "
Jan 07 10:03:34 crc kubenswrapper[5131]: I0107 10:03:34.903503 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c7812cfa-5291-4815-83a6-7bee81e13321-build-system-configs\") pod \"c7812cfa-5291-4815-83a6-7bee81e13321\" (UID: \"c7812cfa-5291-4815-83a6-7bee81e13321\") "
Jan 07 10:03:34 crc kubenswrapper[5131]: I0107 10:03:34.903873 5131 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/c7812cfa-5291-4815-83a6-7bee81e13321-builder-dockercfg-vc6bg-pull\") on node \"crc\" DevicePath \"\""
Jan 07 10:03:34 crc kubenswrapper[5131]: I0107 10:03:34.903905 5131 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c7812cfa-5291-4815-83a6-7bee81e13321-container-storage-root\") on node \"crc\" DevicePath \"\""
Jan 07 10:03:34 crc kubenswrapper[5131]: I0107 10:03:34.903922 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-k25dq\" (UniqueName: \"kubernetes.io/projected/c7812cfa-5291-4815-83a6-7bee81e13321-kube-api-access-k25dq\") on node \"crc\" DevicePath \"\""
Jan 07 10:03:34 crc kubenswrapper[5131]: I0107 10:03:34.903938 5131 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c7812cfa-5291-4815-83a6-7bee81e13321-build-blob-cache\") on node \"crc\" DevicePath \"\""
Jan 07 10:03:34 crc kubenswrapper[5131]: I0107 10:03:34.904044 5131 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/c7812cfa-5291-4815-83a6-7bee81e13321-builder-dockercfg-vc6bg-push\") on node \"crc\" DevicePath \"\""
Jan 07 10:03:34 crc kubenswrapper[5131]: I0107 10:03:34.903871 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7812cfa-5291-4815-83a6-7bee81e13321-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "c7812cfa-5291-4815-83a6-7bee81e13321" (UID: "c7812cfa-5291-4815-83a6-7bee81e13321"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 07 10:03:34 crc kubenswrapper[5131]: I0107 10:03:34.903988 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7812cfa-5291-4815-83a6-7bee81e13321-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "c7812cfa-5291-4815-83a6-7bee81e13321" (UID: "c7812cfa-5291-4815-83a6-7bee81e13321"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 07 10:03:34 crc kubenswrapper[5131]: I0107 10:03:34.904018 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7812cfa-5291-4815-83a6-7bee81e13321-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "c7812cfa-5291-4815-83a6-7bee81e13321" (UID: "c7812cfa-5291-4815-83a6-7bee81e13321"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 07 10:03:34 crc kubenswrapper[5131]: I0107 10:03:34.904250 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7812cfa-5291-4815-83a6-7bee81e13321-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "c7812cfa-5291-4815-83a6-7bee81e13321" (UID: "c7812cfa-5291-4815-83a6-7bee81e13321"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 07 10:03:34 crc kubenswrapper[5131]: I0107 10:03:34.904303 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7812cfa-5291-4815-83a6-7bee81e13321-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "c7812cfa-5291-4815-83a6-7bee81e13321" (UID: "c7812cfa-5291-4815-83a6-7bee81e13321"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 07 10:03:34 crc kubenswrapper[5131]: I0107 10:03:34.904583 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7812cfa-5291-4815-83a6-7bee81e13321-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "c7812cfa-5291-4815-83a6-7bee81e13321" (UID: "c7812cfa-5291-4815-83a6-7bee81e13321"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 07 10:03:35 crc kubenswrapper[5131]: I0107 10:03:35.005158 5131 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c7812cfa-5291-4815-83a6-7bee81e13321-build-system-configs\") on node \"crc\" DevicePath \"\""
Jan 07 10:03:35 crc kubenswrapper[5131]: I0107 10:03:35.005193 5131 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c7812cfa-5291-4815-83a6-7bee81e13321-buildworkdir\") on node \"crc\" DevicePath \"\""
Jan 07 10:03:35 crc kubenswrapper[5131]: I0107 10:03:35.005206 5131 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c7812cfa-5291-4815-83a6-7bee81e13321-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 07 10:03:35 crc kubenswrapper[5131]: I0107 10:03:35.005219 5131 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c7812cfa-5291-4815-83a6-7bee81e13321-buildcachedir\") on node \"crc\" DevicePath \"\""
Jan 07 10:03:35 crc kubenswrapper[5131]: I0107 10:03:35.005232 5131 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c7812cfa-5291-4815-83a6-7bee81e13321-container-storage-run\") on node \"crc\" DevicePath \"\""
Jan 07 10:03:35 crc kubenswrapper[5131]: I0107 10:03:35.005243 5131 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c7812cfa-5291-4815-83a6-7bee81e13321-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 07 10:03:35 crc kubenswrapper[5131]: I0107 10:03:35.370594 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_c7812cfa-5291-4815-83a6-7bee81e13321/manage-dockerfile/0.log"
Jan 07 10:03:35 crc kubenswrapper[5131]: I0107 10:03:35.370671 5131 generic.go:358] "Generic (PLEG): container finished" podID="c7812cfa-5291-4815-83a6-7bee81e13321" containerID="b8da8ed6466f2c7c758d494bbac2f11b3b58eab5015ef69ad2c3f82ccf473482" exitCode=1
Jan 07 10:03:35 crc kubenswrapper[5131]: I0107 10:03:35.370848 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build"
Jan 07 10:03:35 crc kubenswrapper[5131]: I0107 10:03:35.370989 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"c7812cfa-5291-4815-83a6-7bee81e13321","Type":"ContainerDied","Data":"b8da8ed6466f2c7c758d494bbac2f11b3b58eab5015ef69ad2c3f82ccf473482"}
Jan 07 10:03:35 crc kubenswrapper[5131]: I0107 10:03:35.371054 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"c7812cfa-5291-4815-83a6-7bee81e13321","Type":"ContainerDied","Data":"218dfcb258e6c26afacd516fafc925221b35814bcb8fd46a88e378d62d182133"}
Jan 07 10:03:35 crc kubenswrapper[5131]: I0107 10:03:35.371087 5131 scope.go:117] "RemoveContainer" containerID="b8da8ed6466f2c7c758d494bbac2f11b3b58eab5015ef69ad2c3f82ccf473482"
Jan 07 10:03:35 crc kubenswrapper[5131]: I0107 10:03:35.392468 5131 scope.go:117] "RemoveContainer" containerID="b8da8ed6466f2c7c758d494bbac2f11b3b58eab5015ef69ad2c3f82ccf473482"
Jan 07 10:03:35 crc kubenswrapper[5131]: E0107 10:03:35.393052 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8da8ed6466f2c7c758d494bbac2f11b3b58eab5015ef69ad2c3f82ccf473482\": container with ID starting with b8da8ed6466f2c7c758d494bbac2f11b3b58eab5015ef69ad2c3f82ccf473482 not found: ID does not exist" containerID="b8da8ed6466f2c7c758d494bbac2f11b3b58eab5015ef69ad2c3f82ccf473482"
Jan 07 10:03:35 crc kubenswrapper[5131]: I0107 10:03:35.393101 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8da8ed6466f2c7c758d494bbac2f11b3b58eab5015ef69ad2c3f82ccf473482"} err="failed to get container status \"b8da8ed6466f2c7c758d494bbac2f11b3b58eab5015ef69ad2c3f82ccf473482\": rpc error: code = NotFound desc = could not find container \"b8da8ed6466f2c7c758d494bbac2f11b3b58eab5015ef69ad2c3f82ccf473482\": container with ID starting with b8da8ed6466f2c7c758d494bbac2f11b3b58eab5015ef69ad2c3f82ccf473482 not found: ID does not exist"
Jan 07 10:03:35 crc kubenswrapper[5131]: I0107 10:03:35.417956 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"]
Jan 07 10:03:35 crc kubenswrapper[5131]: I0107 10:03:35.427499 5131 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"]
Jan 07 10:03:36 crc kubenswrapper[5131]: I0107 10:03:36.188392 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7812cfa-5291-4815-83a6-7bee81e13321" path="/var/lib/kubelet/pods/c7812cfa-5291-4815-83a6-7bee81e13321/volumes"
Jan 07 10:03:45 crc kubenswrapper[5131]: I0107 10:03:45.447755 5131 generic.go:358] "Generic (PLEG): container finished" podID="6bdce6a8-6c59-4eb5-9724-cacf299dcd90" containerID="c89251fc5d5d15077b61991de7fd8333074d4f508c6fec5c5b852d091cab2f33" exitCode=0
Jan 07 10:03:45 crc kubenswrapper[5131]: I0107 10:03:45.447868 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"6bdce6a8-6c59-4eb5-9724-cacf299dcd90","Type":"ContainerDied","Data":"c89251fc5d5d15077b61991de7fd8333074d4f508c6fec5c5b852d091cab2f33"}
Jan 07 10:03:46 crc kubenswrapper[5131]: I0107 10:03:46.466081 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"6bdce6a8-6c59-4eb5-9724-cacf299dcd90","Type":"ContainerStarted","Data":"6d29593d210b06200c639e14c19571f7fbd26653382f1dc2e23c26e308de245c"}
Jan 07 10:03:47 crc kubenswrapper[5131]: I0107 10:03:47.476579 5131 generic.go:358] "Generic (PLEG): container finished" podID="6bdce6a8-6c59-4eb5-9724-cacf299dcd90" containerID="6d29593d210b06200c639e14c19571f7fbd26653382f1dc2e23c26e308de245c" exitCode=0
Jan 07 10:03:47 crc kubenswrapper[5131]: I0107 10:03:47.476684 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"6bdce6a8-6c59-4eb5-9724-cacf299dcd90","Type":"ContainerDied","Data":"6d29593d210b06200c639e14c19571f7fbd26653382f1dc2e23c26e308de245c"}
Jan 07 10:03:47 crc kubenswrapper[5131]: I0107 10:03:47.515021 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_6bdce6a8-6c59-4eb5-9724-cacf299dcd90/manage-dockerfile/0.log"
Jan 07 10:03:50 crc kubenswrapper[5131]: I0107 10:03:50.613411 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"6bdce6a8-6c59-4eb5-9724-cacf299dcd90","Type":"ContainerStarted","Data":"ad85792d83882832142cdbd89bb04ca3d411e4f3e876f422d934fd6db9d52bb1"}
Jan 07 10:03:50 crc kubenswrapper[5131]: I0107 10:03:50.663679 5131 patch_prober.go:28] interesting pod/machine-config-daemon-dvdrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 07 10:03:50 crc kubenswrapper[5131]: I0107 10:03:50.663771 5131 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 07 10:03:50 crc kubenswrapper[5131]: I0107 10:03:50.663885 5131 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn"
Jan 07 10:03:50 crc kubenswrapper[5131]: I0107 10:03:50.664815 5131 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7b5fd7c41683ca17dd95a35646c53ce725c855bc5bff2a2030ae596afb470eb2"} pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 07 10:03:50 crc kubenswrapper[5131]: I0107 10:03:50.664992 5131 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73" containerName="machine-config-daemon" containerID="cri-o://7b5fd7c41683ca17dd95a35646c53ce725c855bc5bff2a2030ae596afb470eb2" gracePeriod=600
Jan 07 10:03:51 crc kubenswrapper[5131]: I0107 10:03:51.625500 5131 generic.go:358] "Generic (PLEG): container finished" podID="3942e752-44ba-4678-8723-6cd778e60d73" containerID="7b5fd7c41683ca17dd95a35646c53ce725c855bc5bff2a2030ae596afb470eb2" exitCode=0
Jan 07 10:03:51 crc kubenswrapper[5131]: I0107 10:03:51.625593 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn"
event={"ID":"3942e752-44ba-4678-8723-6cd778e60d73","Type":"ContainerDied","Data":"7b5fd7c41683ca17dd95a35646c53ce725c855bc5bff2a2030ae596afb470eb2"} Jan 07 10:03:51 crc kubenswrapper[5131]: I0107 10:03:51.626384 5131 scope.go:117] "RemoveContainer" containerID="95b2f2f38ab6b9d142bf531750364a1f6ffccfcd46ca5680da77d1d639a07cbc" Jan 07 10:03:51 crc kubenswrapper[5131]: I0107 10:03:51.672376 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-2-build" podStartSLOduration=20.326106452 podStartE2EDuration="22.672359759s" podCreationTimestamp="2026-01-07 10:03:29 +0000 UTC" firstStartedPulling="2026-01-07 10:03:31.24703462 +0000 UTC m=+839.413336204" lastFinishedPulling="2026-01-07 10:03:33.593287937 +0000 UTC m=+841.759589511" observedRunningTime="2026-01-07 10:03:51.667592014 +0000 UTC m=+859.833893618" watchObservedRunningTime="2026-01-07 10:03:51.672359759 +0000 UTC m=+859.838661333" Jan 07 10:03:52 crc kubenswrapper[5131]: I0107 10:03:52.635879 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" event={"ID":"3942e752-44ba-4678-8723-6cd778e60d73","Type":"ContainerStarted","Data":"763e1eb5460745b4cb7278fb5c4fbd6802295fde5df336a494e758ddf511ec87"} Jan 07 10:04:00 crc kubenswrapper[5131]: I0107 10:04:00.151041 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29463004-qdmtn"] Jan 07 10:04:00 crc kubenswrapper[5131]: I0107 10:04:00.152325 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c7812cfa-5291-4815-83a6-7bee81e13321" containerName="manage-dockerfile" Jan 07 10:04:00 crc kubenswrapper[5131]: I0107 10:04:00.152341 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7812cfa-5291-4815-83a6-7bee81e13321" containerName="manage-dockerfile" Jan 07 10:04:00 crc kubenswrapper[5131]: I0107 10:04:00.152495 5131 memory_manager.go:356] "RemoveStaleState 
removing state" podUID="c7812cfa-5291-4815-83a6-7bee81e13321" containerName="manage-dockerfile" Jan 07 10:04:06 crc kubenswrapper[5131]: I0107 10:04:06.715391 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29463004-qdmtn"] Jan 07 10:04:06 crc kubenswrapper[5131]: I0107 10:04:06.715706 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29463004-qdmtn" Jan 07 10:04:06 crc kubenswrapper[5131]: I0107 10:04:06.719205 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-l8fwl\"" Jan 07 10:04:06 crc kubenswrapper[5131]: I0107 10:04:06.720764 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 07 10:04:06 crc kubenswrapper[5131]: I0107 10:04:06.726268 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 07 10:04:06 crc kubenswrapper[5131]: I0107 10:04:06.860902 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rsq6\" (UniqueName: \"kubernetes.io/projected/5556d222-d67e-4aea-b62c-864c0ea52ad2-kube-api-access-8rsq6\") pod \"auto-csr-approver-29463004-qdmtn\" (UID: \"5556d222-d67e-4aea-b62c-864c0ea52ad2\") " pod="openshift-infra/auto-csr-approver-29463004-qdmtn" Jan 07 10:04:06 crc kubenswrapper[5131]: I0107 10:04:06.962917 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8rsq6\" (UniqueName: \"kubernetes.io/projected/5556d222-d67e-4aea-b62c-864c0ea52ad2-kube-api-access-8rsq6\") pod \"auto-csr-approver-29463004-qdmtn\" (UID: \"5556d222-d67e-4aea-b62c-864c0ea52ad2\") " pod="openshift-infra/auto-csr-approver-29463004-qdmtn" Jan 07 10:04:06 crc kubenswrapper[5131]: I0107 10:04:06.993891 5131 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-8rsq6\" (UniqueName: \"kubernetes.io/projected/5556d222-d67e-4aea-b62c-864c0ea52ad2-kube-api-access-8rsq6\") pod \"auto-csr-approver-29463004-qdmtn\" (UID: \"5556d222-d67e-4aea-b62c-864c0ea52ad2\") " pod="openshift-infra/auto-csr-approver-29463004-qdmtn" Jan 07 10:04:07 crc kubenswrapper[5131]: I0107 10:04:07.048483 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29463004-qdmtn" Jan 07 10:04:07 crc kubenswrapper[5131]: I0107 10:04:07.499377 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29463004-qdmtn"] Jan 07 10:04:07 crc kubenswrapper[5131]: I0107 10:04:07.749153 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29463004-qdmtn" event={"ID":"5556d222-d67e-4aea-b62c-864c0ea52ad2","Type":"ContainerStarted","Data":"32e4e7e5b2d73c0a2969e33a69cb7a6c362052d267398b3dedd0deeeaadff8ed"} Jan 07 10:04:09 crc kubenswrapper[5131]: I0107 10:04:09.774925 5131 generic.go:358] "Generic (PLEG): container finished" podID="5556d222-d67e-4aea-b62c-864c0ea52ad2" containerID="aa0e2cfcae903df8e64949f9774bb3afe65f49cc9468bc7429f56de52dfb88d3" exitCode=0 Jan 07 10:04:09 crc kubenswrapper[5131]: I0107 10:04:09.775001 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29463004-qdmtn" event={"ID":"5556d222-d67e-4aea-b62c-864c0ea52ad2","Type":"ContainerDied","Data":"aa0e2cfcae903df8e64949f9774bb3afe65f49cc9468bc7429f56de52dfb88d3"} Jan 07 10:04:11 crc kubenswrapper[5131]: I0107 10:04:11.004555 5131 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29463004-qdmtn" Jan 07 10:04:11 crc kubenswrapper[5131]: I0107 10:04:11.026191 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rsq6\" (UniqueName: \"kubernetes.io/projected/5556d222-d67e-4aea-b62c-864c0ea52ad2-kube-api-access-8rsq6\") pod \"5556d222-d67e-4aea-b62c-864c0ea52ad2\" (UID: \"5556d222-d67e-4aea-b62c-864c0ea52ad2\") " Jan 07 10:04:11 crc kubenswrapper[5131]: I0107 10:04:11.031922 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5556d222-d67e-4aea-b62c-864c0ea52ad2-kube-api-access-8rsq6" (OuterVolumeSpecName: "kube-api-access-8rsq6") pod "5556d222-d67e-4aea-b62c-864c0ea52ad2" (UID: "5556d222-d67e-4aea-b62c-864c0ea52ad2"). InnerVolumeSpecName "kube-api-access-8rsq6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 10:04:11 crc kubenswrapper[5131]: I0107 10:04:11.128392 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8rsq6\" (UniqueName: \"kubernetes.io/projected/5556d222-d67e-4aea-b62c-864c0ea52ad2-kube-api-access-8rsq6\") on node \"crc\" DevicePath \"\"" Jan 07 10:04:11 crc kubenswrapper[5131]: I0107 10:04:11.788005 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29463004-qdmtn" event={"ID":"5556d222-d67e-4aea-b62c-864c0ea52ad2","Type":"ContainerDied","Data":"32e4e7e5b2d73c0a2969e33a69cb7a6c362052d267398b3dedd0deeeaadff8ed"} Jan 07 10:04:11 crc kubenswrapper[5131]: I0107 10:04:11.788054 5131 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32e4e7e5b2d73c0a2969e33a69cb7a6c362052d267398b3dedd0deeeaadff8ed" Jan 07 10:04:11 crc kubenswrapper[5131]: I0107 10:04:11.788115 5131 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29463004-qdmtn" Jan 07 10:04:12 crc kubenswrapper[5131]: I0107 10:04:12.070663 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29462998-sj8hm"] Jan 07 10:04:12 crc kubenswrapper[5131]: I0107 10:04:12.075039 5131 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29462998-sj8hm"] Jan 07 10:04:12 crc kubenswrapper[5131]: I0107 10:04:12.189754 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c2319e5-acee-42d1-8d43-b3bddb18f996" path="/var/lib/kubelet/pods/1c2319e5-acee-42d1-8d43-b3bddb18f996/volumes" Jan 07 10:04:32 crc kubenswrapper[5131]: I0107 10:04:32.608964 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wcqw9_a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1/kube-multus/0.log" Jan 07 10:04:32 crc kubenswrapper[5131]: I0107 10:04:32.612619 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wcqw9_a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1/kube-multus/0.log" Jan 07 10:04:32 crc kubenswrapper[5131]: I0107 10:04:32.615920 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 07 10:04:32 crc kubenswrapper[5131]: I0107 10:04:32.621380 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 07 10:04:33 crc kubenswrapper[5131]: I0107 10:04:33.603164 5131 scope.go:117] "RemoveContainer" containerID="adfa71f8b7265e1000e225847369c7e3f9c04acb7e3c76e4944fa28536eafe5d" Jan 07 10:05:31 crc kubenswrapper[5131]: I0107 10:05:31.409679 5131 generic.go:358] "Generic (PLEG): container finished" podID="6bdce6a8-6c59-4eb5-9724-cacf299dcd90" 
containerID="ad85792d83882832142cdbd89bb04ca3d411e4f3e876f422d934fd6db9d52bb1" exitCode=0 Jan 07 10:05:31 crc kubenswrapper[5131]: I0107 10:05:31.409818 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"6bdce6a8-6c59-4eb5-9724-cacf299dcd90","Type":"ContainerDied","Data":"ad85792d83882832142cdbd89bb04ca3d411e4f3e876f422d934fd6db9d52bb1"} Jan 07 10:05:32 crc kubenswrapper[5131]: I0107 10:05:32.712878 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 07 10:05:32 crc kubenswrapper[5131]: I0107 10:05:32.867989 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnpmn\" (UniqueName: \"kubernetes.io/projected/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-kube-api-access-lnpmn\") pod \"6bdce6a8-6c59-4eb5-9724-cacf299dcd90\" (UID: \"6bdce6a8-6c59-4eb5-9724-cacf299dcd90\") " Jan 07 10:05:32 crc kubenswrapper[5131]: I0107 10:05:32.868049 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-build-ca-bundles\") pod \"6bdce6a8-6c59-4eb5-9724-cacf299dcd90\" (UID: \"6bdce6a8-6c59-4eb5-9724-cacf299dcd90\") " Jan 07 10:05:32 crc kubenswrapper[5131]: I0107 10:05:32.868086 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-buildworkdir\") pod \"6bdce6a8-6c59-4eb5-9724-cacf299dcd90\" (UID: \"6bdce6a8-6c59-4eb5-9724-cacf299dcd90\") " Jan 07 10:05:32 crc kubenswrapper[5131]: I0107 10:05:32.868111 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-node-pullsecrets\") pod 
\"6bdce6a8-6c59-4eb5-9724-cacf299dcd90\" (UID: \"6bdce6a8-6c59-4eb5-9724-cacf299dcd90\") " Jan 07 10:05:32 crc kubenswrapper[5131]: I0107 10:05:32.868170 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-build-proxy-ca-bundles\") pod \"6bdce6a8-6c59-4eb5-9724-cacf299dcd90\" (UID: \"6bdce6a8-6c59-4eb5-9724-cacf299dcd90\") " Jan 07 10:05:32 crc kubenswrapper[5131]: I0107 10:05:32.868255 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-builder-dockercfg-vc6bg-pull\") pod \"6bdce6a8-6c59-4eb5-9724-cacf299dcd90\" (UID: \"6bdce6a8-6c59-4eb5-9724-cacf299dcd90\") " Jan 07 10:05:32 crc kubenswrapper[5131]: I0107 10:05:32.868275 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "6bdce6a8-6c59-4eb5-9724-cacf299dcd90" (UID: "6bdce6a8-6c59-4eb5-9724-cacf299dcd90"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 07 10:05:32 crc kubenswrapper[5131]: I0107 10:05:32.868332 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-build-blob-cache\") pod \"6bdce6a8-6c59-4eb5-9724-cacf299dcd90\" (UID: \"6bdce6a8-6c59-4eb5-9724-cacf299dcd90\") " Jan 07 10:05:32 crc kubenswrapper[5131]: I0107 10:05:32.868374 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-container-storage-root\") pod \"6bdce6a8-6c59-4eb5-9724-cacf299dcd90\" (UID: \"6bdce6a8-6c59-4eb5-9724-cacf299dcd90\") " Jan 07 10:05:32 crc kubenswrapper[5131]: I0107 10:05:32.868407 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-container-storage-run\") pod \"6bdce6a8-6c59-4eb5-9724-cacf299dcd90\" (UID: \"6bdce6a8-6c59-4eb5-9724-cacf299dcd90\") " Jan 07 10:05:32 crc kubenswrapper[5131]: I0107 10:05:32.868434 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-build-system-configs\") pod \"6bdce6a8-6c59-4eb5-9724-cacf299dcd90\" (UID: \"6bdce6a8-6c59-4eb5-9724-cacf299dcd90\") " Jan 07 10:05:32 crc kubenswrapper[5131]: I0107 10:05:32.868452 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-buildcachedir\") pod \"6bdce6a8-6c59-4eb5-9724-cacf299dcd90\" (UID: \"6bdce6a8-6c59-4eb5-9724-cacf299dcd90\") " Jan 07 10:05:32 crc kubenswrapper[5131]: I0107 10:05:32.868556 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume 
started for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-builder-dockercfg-vc6bg-push\") pod \"6bdce6a8-6c59-4eb5-9724-cacf299dcd90\" (UID: \"6bdce6a8-6c59-4eb5-9724-cacf299dcd90\") " Jan 07 10:05:32 crc kubenswrapper[5131]: I0107 10:05:32.868668 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "6bdce6a8-6c59-4eb5-9724-cacf299dcd90" (UID: "6bdce6a8-6c59-4eb5-9724-cacf299dcd90"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 07 10:05:32 crc kubenswrapper[5131]: I0107 10:05:32.868866 5131 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 07 10:05:32 crc kubenswrapper[5131]: I0107 10:05:32.868883 5131 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 07 10:05:32 crc kubenswrapper[5131]: I0107 10:05:32.869252 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "6bdce6a8-6c59-4eb5-9724-cacf299dcd90" (UID: "6bdce6a8-6c59-4eb5-9724-cacf299dcd90"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 10:05:32 crc kubenswrapper[5131]: I0107 10:05:32.869709 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "6bdce6a8-6c59-4eb5-9724-cacf299dcd90" (UID: "6bdce6a8-6c59-4eb5-9724-cacf299dcd90"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:05:32 crc kubenswrapper[5131]: I0107 10:05:32.870671 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "6bdce6a8-6c59-4eb5-9724-cacf299dcd90" (UID: "6bdce6a8-6c59-4eb5-9724-cacf299dcd90"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 10:05:32 crc kubenswrapper[5131]: I0107 10:05:32.874468 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-kube-api-access-lnpmn" (OuterVolumeSpecName: "kube-api-access-lnpmn") pod "6bdce6a8-6c59-4eb5-9724-cacf299dcd90" (UID: "6bdce6a8-6c59-4eb5-9724-cacf299dcd90"). InnerVolumeSpecName "kube-api-access-lnpmn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 10:05:32 crc kubenswrapper[5131]: I0107 10:05:32.874714 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-builder-dockercfg-vc6bg-push" (OuterVolumeSpecName: "builder-dockercfg-vc6bg-push") pod "6bdce6a8-6c59-4eb5-9724-cacf299dcd90" (UID: "6bdce6a8-6c59-4eb5-9724-cacf299dcd90"). InnerVolumeSpecName "builder-dockercfg-vc6bg-push". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 10:05:32 crc kubenswrapper[5131]: I0107 10:05:32.874761 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-builder-dockercfg-vc6bg-pull" (OuterVolumeSpecName: "builder-dockercfg-vc6bg-pull") pod "6bdce6a8-6c59-4eb5-9724-cacf299dcd90" (UID: "6bdce6a8-6c59-4eb5-9724-cacf299dcd90"). InnerVolumeSpecName "builder-dockercfg-vc6bg-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 10:05:32 crc kubenswrapper[5131]: I0107 10:05:32.875365 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "6bdce6a8-6c59-4eb5-9724-cacf299dcd90" (UID: "6bdce6a8-6c59-4eb5-9724-cacf299dcd90"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 10:05:32 crc kubenswrapper[5131]: I0107 10:05:32.916672 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "6bdce6a8-6c59-4eb5-9724-cacf299dcd90" (UID: "6bdce6a8-6c59-4eb5-9724-cacf299dcd90"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:05:32 crc kubenswrapper[5131]: I0107 10:05:32.970323 5131 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 07 10:05:32 crc kubenswrapper[5131]: I0107 10:05:32.970351 5131 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 07 10:05:32 crc kubenswrapper[5131]: I0107 10:05:32.970360 5131 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-builder-dockercfg-vc6bg-push\") on node \"crc\" DevicePath \"\"" Jan 07 10:05:32 crc kubenswrapper[5131]: I0107 10:05:32.970370 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lnpmn\" (UniqueName: \"kubernetes.io/projected/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-kube-api-access-lnpmn\") on node \"crc\" DevicePath \"\"" Jan 07 10:05:32 crc kubenswrapper[5131]: I0107 10:05:32.970378 5131 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 07 10:05:32 crc kubenswrapper[5131]: I0107 10:05:32.970385 5131 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 07 10:05:32 crc kubenswrapper[5131]: I0107 10:05:32.970393 5131 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-build-proxy-ca-bundles\") on node \"crc\" 
DevicePath \"\"" Jan 07 10:05:32 crc kubenswrapper[5131]: I0107 10:05:32.970401 5131 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-builder-dockercfg-vc6bg-pull\") on node \"crc\" DevicePath \"\"" Jan 07 10:05:33 crc kubenswrapper[5131]: I0107 10:05:33.082894 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "6bdce6a8-6c59-4eb5-9724-cacf299dcd90" (UID: "6bdce6a8-6c59-4eb5-9724-cacf299dcd90"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:05:33 crc kubenswrapper[5131]: I0107 10:05:33.173632 5131 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 07 10:05:33 crc kubenswrapper[5131]: I0107 10:05:33.434398 5131 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 07 10:05:33 crc kubenswrapper[5131]: I0107 10:05:33.434383 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"6bdce6a8-6c59-4eb5-9724-cacf299dcd90","Type":"ContainerDied","Data":"cf80d7661b6fb478f5337015bdcee513599f6a7fa69e0dd3e0ffc9ce96dc6ea9"} Jan 07 10:05:33 crc kubenswrapper[5131]: I0107 10:05:33.434891 5131 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf80d7661b6fb478f5337015bdcee513599f6a7fa69e0dd3e0ffc9ce96dc6ea9" Jan 07 10:05:35 crc kubenswrapper[5131]: I0107 10:05:35.144497 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "6bdce6a8-6c59-4eb5-9724-cacf299dcd90" (UID: "6bdce6a8-6c59-4eb5-9724-cacf299dcd90"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:05:35 crc kubenswrapper[5131]: I0107 10:05:35.209488 5131 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6bdce6a8-6c59-4eb5-9724-cacf299dcd90-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 07 10:05:37 crc kubenswrapper[5131]: I0107 10:05:37.608217 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 07 10:05:37 crc kubenswrapper[5131]: I0107 10:05:37.609889 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6bdce6a8-6c59-4eb5-9724-cacf299dcd90" containerName="git-clone" Jan 07 10:05:37 crc kubenswrapper[5131]: I0107 10:05:37.609916 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bdce6a8-6c59-4eb5-9724-cacf299dcd90" containerName="git-clone" Jan 07 10:05:37 crc kubenswrapper[5131]: I0107 10:05:37.609931 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6bdce6a8-6c59-4eb5-9724-cacf299dcd90" containerName="manage-dockerfile" Jan 07 10:05:37 crc kubenswrapper[5131]: I0107 10:05:37.609945 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bdce6a8-6c59-4eb5-9724-cacf299dcd90" containerName="manage-dockerfile" Jan 07 10:05:37 crc kubenswrapper[5131]: I0107 10:05:37.609961 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5556d222-d67e-4aea-b62c-864c0ea52ad2" containerName="oc" Jan 07 10:05:37 crc kubenswrapper[5131]: I0107 10:05:37.609983 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="5556d222-d67e-4aea-b62c-864c0ea52ad2" containerName="oc" Jan 07 10:05:37 crc kubenswrapper[5131]: I0107 10:05:37.610130 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6bdce6a8-6c59-4eb5-9724-cacf299dcd90" containerName="docker-build" Jan 07 10:05:37 crc kubenswrapper[5131]: I0107 10:05:37.610145 5131 
state_mem.go:107] "Deleted CPUSet assignment" podUID="6bdce6a8-6c59-4eb5-9724-cacf299dcd90" containerName="docker-build" Jan 07 10:05:37 crc kubenswrapper[5131]: I0107 10:05:37.610355 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="5556d222-d67e-4aea-b62c-864c0ea52ad2" containerName="oc" Jan 07 10:05:37 crc kubenswrapper[5131]: I0107 10:05:37.610377 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="6bdce6a8-6c59-4eb5-9724-cacf299dcd90" containerName="docker-build" Jan 07 10:05:37 crc kubenswrapper[5131]: I0107 10:05:37.615144 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Jan 07 10:05:37 crc kubenswrapper[5131]: I0107 10:05:37.618501 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-1-sys-config\"" Jan 07 10:05:37 crc kubenswrapper[5131]: I0107 10:05:37.618536 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-1-ca\"" Jan 07 10:05:37 crc kubenswrapper[5131]: I0107 10:05:37.618554 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-1-global-ca\"" Jan 07 10:05:37 crc kubenswrapper[5131]: I0107 10:05:37.619348 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-vc6bg\"" Jan 07 10:05:37 crc kubenswrapper[5131]: I0107 10:05:37.629645 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 07 10:05:37 crc kubenswrapper[5131]: I0107 10:05:37.643699 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/eddb10f4-8873-4951-9c05-41cb9d2fb31a-build-blob-cache\") pod 
\"smart-gateway-operator-1-build\" (UID: \"eddb10f4-8873-4951-9c05-41cb9d2fb31a\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 07 10:05:37 crc kubenswrapper[5131]: I0107 10:05:37.643740 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/eddb10f4-8873-4951-9c05-41cb9d2fb31a-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"eddb10f4-8873-4951-9c05-41cb9d2fb31a\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 07 10:05:37 crc kubenswrapper[5131]: I0107 10:05:37.643781 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8vqr\" (UniqueName: \"kubernetes.io/projected/eddb10f4-8873-4951-9c05-41cb9d2fb31a-kube-api-access-s8vqr\") pod \"smart-gateway-operator-1-build\" (UID: \"eddb10f4-8873-4951-9c05-41cb9d2fb31a\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 07 10:05:37 crc kubenswrapper[5131]: I0107 10:05:37.643818 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/eddb10f4-8873-4951-9c05-41cb9d2fb31a-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"eddb10f4-8873-4951-9c05-41cb9d2fb31a\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 07 10:05:37 crc kubenswrapper[5131]: I0107 10:05:37.643928 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eddb10f4-8873-4951-9c05-41cb9d2fb31a-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"eddb10f4-8873-4951-9c05-41cb9d2fb31a\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 07 10:05:37 crc kubenswrapper[5131]: I0107 10:05:37.644018 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/eddb10f4-8873-4951-9c05-41cb9d2fb31a-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"eddb10f4-8873-4951-9c05-41cb9d2fb31a\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 07 10:05:37 crc kubenswrapper[5131]: I0107 10:05:37.644164 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/eddb10f4-8873-4951-9c05-41cb9d2fb31a-builder-dockercfg-vc6bg-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"eddb10f4-8873-4951-9c05-41cb9d2fb31a\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 07 10:05:37 crc kubenswrapper[5131]: I0107 10:05:37.644321 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/eddb10f4-8873-4951-9c05-41cb9d2fb31a-builder-dockercfg-vc6bg-push\") pod \"smart-gateway-operator-1-build\" (UID: \"eddb10f4-8873-4951-9c05-41cb9d2fb31a\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 07 10:05:37 crc kubenswrapper[5131]: I0107 10:05:37.644422 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/eddb10f4-8873-4951-9c05-41cb9d2fb31a-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"eddb10f4-8873-4951-9c05-41cb9d2fb31a\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 07 10:05:37 crc kubenswrapper[5131]: I0107 10:05:37.644526 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/eddb10f4-8873-4951-9c05-41cb9d2fb31a-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"eddb10f4-8873-4951-9c05-41cb9d2fb31a\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 07 10:05:37 crc kubenswrapper[5131]: I0107 10:05:37.644631 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eddb10f4-8873-4951-9c05-41cb9d2fb31a-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"eddb10f4-8873-4951-9c05-41cb9d2fb31a\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 07 10:05:37 crc kubenswrapper[5131]: I0107 10:05:37.644703 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/eddb10f4-8873-4951-9c05-41cb9d2fb31a-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"eddb10f4-8873-4951-9c05-41cb9d2fb31a\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 07 10:05:37 crc kubenswrapper[5131]: I0107 10:05:37.745740 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/eddb10f4-8873-4951-9c05-41cb9d2fb31a-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"eddb10f4-8873-4951-9c05-41cb9d2fb31a\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 07 10:05:37 crc kubenswrapper[5131]: I0107 10:05:37.746149 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eddb10f4-8873-4951-9c05-41cb9d2fb31a-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"eddb10f4-8873-4951-9c05-41cb9d2fb31a\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 07 10:05:37 crc kubenswrapper[5131]: I0107 10:05:37.746297 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/eddb10f4-8873-4951-9c05-41cb9d2fb31a-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"eddb10f4-8873-4951-9c05-41cb9d2fb31a\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 07 10:05:37 crc kubenswrapper[5131]: I0107 10:05:37.746366 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/eddb10f4-8873-4951-9c05-41cb9d2fb31a-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"eddb10f4-8873-4951-9c05-41cb9d2fb31a\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 07 10:05:37 crc kubenswrapper[5131]: I0107 10:05:37.746443 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/eddb10f4-8873-4951-9c05-41cb9d2fb31a-builder-dockercfg-vc6bg-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"eddb10f4-8873-4951-9c05-41cb9d2fb31a\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 07 10:05:37 crc kubenswrapper[5131]: I0107 10:05:37.746518 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/eddb10f4-8873-4951-9c05-41cb9d2fb31a-builder-dockercfg-vc6bg-push\") pod \"smart-gateway-operator-1-build\" (UID: \"eddb10f4-8873-4951-9c05-41cb9d2fb31a\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 07 10:05:37 crc kubenswrapper[5131]: I0107 10:05:37.746545 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/eddb10f4-8873-4951-9c05-41cb9d2fb31a-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"eddb10f4-8873-4951-9c05-41cb9d2fb31a\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 07 10:05:37 crc kubenswrapper[5131]: I0107 10:05:37.746638 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/eddb10f4-8873-4951-9c05-41cb9d2fb31a-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"eddb10f4-8873-4951-9c05-41cb9d2fb31a\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 07 10:05:37 crc kubenswrapper[5131]: I0107 10:05:37.746988 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/eddb10f4-8873-4951-9c05-41cb9d2fb31a-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"eddb10f4-8873-4951-9c05-41cb9d2fb31a\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 07 10:05:37 crc kubenswrapper[5131]: I0107 10:05:37.746988 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eddb10f4-8873-4951-9c05-41cb9d2fb31a-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"eddb10f4-8873-4951-9c05-41cb9d2fb31a\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 07 10:05:37 crc kubenswrapper[5131]: I0107 10:05:37.747151 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eddb10f4-8873-4951-9c05-41cb9d2fb31a-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"eddb10f4-8873-4951-9c05-41cb9d2fb31a\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 07 10:05:37 crc kubenswrapper[5131]: I0107 10:05:37.747179 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/eddb10f4-8873-4951-9c05-41cb9d2fb31a-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"eddb10f4-8873-4951-9c05-41cb9d2fb31a\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 07 10:05:37 crc kubenswrapper[5131]: I0107 10:05:37.747184 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/eddb10f4-8873-4951-9c05-41cb9d2fb31a-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"eddb10f4-8873-4951-9c05-41cb9d2fb31a\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 07 10:05:37 crc kubenswrapper[5131]: I0107 10:05:37.747260 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/eddb10f4-8873-4951-9c05-41cb9d2fb31a-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"eddb10f4-8873-4951-9c05-41cb9d2fb31a\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 07 10:05:37 crc kubenswrapper[5131]: I0107 10:05:37.747303 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/eddb10f4-8873-4951-9c05-41cb9d2fb31a-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"eddb10f4-8873-4951-9c05-41cb9d2fb31a\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 07 10:05:37 crc kubenswrapper[5131]: I0107 10:05:37.747312 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/eddb10f4-8873-4951-9c05-41cb9d2fb31a-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"eddb10f4-8873-4951-9c05-41cb9d2fb31a\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 07 10:05:37 crc kubenswrapper[5131]: I0107 10:05:37.747411 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-s8vqr\" (UniqueName: \"kubernetes.io/projected/eddb10f4-8873-4951-9c05-41cb9d2fb31a-kube-api-access-s8vqr\") pod \"smart-gateway-operator-1-build\" (UID: \"eddb10f4-8873-4951-9c05-41cb9d2fb31a\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 07 10:05:37 crc kubenswrapper[5131]: I0107 10:05:37.747433 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/eddb10f4-8873-4951-9c05-41cb9d2fb31a-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"eddb10f4-8873-4951-9c05-41cb9d2fb31a\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 07 10:05:37 crc kubenswrapper[5131]: I0107 10:05:37.747784 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/eddb10f4-8873-4951-9c05-41cb9d2fb31a-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"eddb10f4-8873-4951-9c05-41cb9d2fb31a\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 07 10:05:37 crc kubenswrapper[5131]: I0107 10:05:37.747961 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/eddb10f4-8873-4951-9c05-41cb9d2fb31a-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"eddb10f4-8873-4951-9c05-41cb9d2fb31a\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 07 10:05:37 crc kubenswrapper[5131]: I0107 10:05:37.748114 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eddb10f4-8873-4951-9c05-41cb9d2fb31a-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"eddb10f4-8873-4951-9c05-41cb9d2fb31a\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 07 10:05:37 crc kubenswrapper[5131]: I0107 10:05:37.754090 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/eddb10f4-8873-4951-9c05-41cb9d2fb31a-builder-dockercfg-vc6bg-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"eddb10f4-8873-4951-9c05-41cb9d2fb31a\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 07 10:05:37 crc kubenswrapper[5131]: I0107 10:05:37.754227 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/eddb10f4-8873-4951-9c05-41cb9d2fb31a-builder-dockercfg-vc6bg-push\") pod \"smart-gateway-operator-1-build\" (UID: \"eddb10f4-8873-4951-9c05-41cb9d2fb31a\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 07 10:05:37 crc kubenswrapper[5131]: I0107 10:05:37.778461 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8vqr\" (UniqueName: \"kubernetes.io/projected/eddb10f4-8873-4951-9c05-41cb9d2fb31a-kube-api-access-s8vqr\") pod \"smart-gateway-operator-1-build\" (UID: \"eddb10f4-8873-4951-9c05-41cb9d2fb31a\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 07 10:05:37 crc kubenswrapper[5131]: I0107 10:05:37.949984 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build"
Jan 07 10:05:38 crc kubenswrapper[5131]: I0107 10:05:38.187193 5131 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 07 10:05:38 crc kubenswrapper[5131]: I0107 10:05:38.190523 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"]
Jan 07 10:05:38 crc kubenswrapper[5131]: I0107 10:05:38.478584 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"eddb10f4-8873-4951-9c05-41cb9d2fb31a","Type":"ContainerStarted","Data":"916729f4a37d1d82db54cfad9a47e8ded4a2f65fbaf6d323bd7d0b2cdd80ba4c"}
Jan 07 10:05:39 crc kubenswrapper[5131]: I0107 10:05:39.489786 5131 generic.go:358] "Generic (PLEG): container finished" podID="eddb10f4-8873-4951-9c05-41cb9d2fb31a" containerID="f47fa6a876db21ebadc5812195a746c47be9a1b144b51165b70d0c570faba7ca" exitCode=0
Jan 07 10:05:39 crc kubenswrapper[5131]: I0107 10:05:39.490043 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"eddb10f4-8873-4951-9c05-41cb9d2fb31a","Type":"ContainerDied","Data":"f47fa6a876db21ebadc5812195a746c47be9a1b144b51165b70d0c570faba7ca"}
Jan 07 10:05:40 crc kubenswrapper[5131]: I0107 10:05:40.502344 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"eddb10f4-8873-4951-9c05-41cb9d2fb31a","Type":"ContainerStarted","Data":"f00ca7f9d4072a79984b9a57ed5e2f9b5fb4e8b87129d582696077895b94064f"}
Jan 07 10:05:40 crc kubenswrapper[5131]: I0107 10:05:40.531679 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-1-build" podStartSLOduration=3.531648991 podStartE2EDuration="3.531648991s" podCreationTimestamp="2026-01-07 10:05:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 10:05:40.525064418 +0000 UTC m=+968.691365982" watchObservedRunningTime="2026-01-07 10:05:40.531648991 +0000 UTC m=+968.697950595"
Jan 07 10:05:48 crc kubenswrapper[5131]: I0107 10:05:48.315784 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"]
Jan 07 10:05:48 crc kubenswrapper[5131]: I0107 10:05:48.316649 5131 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/smart-gateway-operator-1-build" podUID="eddb10f4-8873-4951-9c05-41cb9d2fb31a" containerName="docker-build" containerID="cri-o://f00ca7f9d4072a79984b9a57ed5e2f9b5fb4e8b87129d582696077895b94064f" gracePeriod=30
Jan 07 10:05:49 crc kubenswrapper[5131]: I0107 10:05:49.980801 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"]
Jan 07 10:05:52 crc kubenswrapper[5131]: I0107 10:05:52.802919 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build"
Jan 07 10:05:52 crc kubenswrapper[5131]: I0107 10:05:52.808783 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-2-sys-config\""
Jan 07 10:05:52 crc kubenswrapper[5131]: I0107 10:05:52.808826 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-2-global-ca\""
Jan 07 10:05:52 crc kubenswrapper[5131]: I0107 10:05:52.809531 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-2-ca\""
Jan 07 10:05:52 crc kubenswrapper[5131]: I0107 10:05:52.819005 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"]
Jan 07 10:05:52 crc kubenswrapper[5131]: I0107 10:05:52.984538 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/65a291b6-fbd4-4fed-ae92-622e0120c857-builder-dockercfg-vc6bg-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"65a291b6-fbd4-4fed-ae92-622e0120c857\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 07 10:05:52 crc kubenswrapper[5131]: I0107 10:05:52.984603 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/65a291b6-fbd4-4fed-ae92-622e0120c857-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"65a291b6-fbd4-4fed-ae92-622e0120c857\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 07 10:05:52 crc kubenswrapper[5131]: I0107 10:05:52.984627 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/65a291b6-fbd4-4fed-ae92-622e0120c857-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"65a291b6-fbd4-4fed-ae92-622e0120c857\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 07 10:05:52 crc kubenswrapper[5131]: I0107 10:05:52.984670 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/65a291b6-fbd4-4fed-ae92-622e0120c857-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"65a291b6-fbd4-4fed-ae92-622e0120c857\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 07 10:05:52 crc kubenswrapper[5131]: I0107 10:05:52.984704 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/65a291b6-fbd4-4fed-ae92-622e0120c857-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"65a291b6-fbd4-4fed-ae92-622e0120c857\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 07 10:05:52 crc kubenswrapper[5131]: I0107 10:05:52.984721 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/65a291b6-fbd4-4fed-ae92-622e0120c857-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"65a291b6-fbd4-4fed-ae92-622e0120c857\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 07 10:05:52 crc kubenswrapper[5131]: I0107 10:05:52.984825 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/65a291b6-fbd4-4fed-ae92-622e0120c857-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"65a291b6-fbd4-4fed-ae92-622e0120c857\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 07 10:05:52 crc kubenswrapper[5131]: I0107 10:05:52.984900 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/65a291b6-fbd4-4fed-ae92-622e0120c857-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"65a291b6-fbd4-4fed-ae92-622e0120c857\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 07 10:05:52 crc kubenswrapper[5131]: I0107 10:05:52.984960 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/65a291b6-fbd4-4fed-ae92-622e0120c857-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"65a291b6-fbd4-4fed-ae92-622e0120c857\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 07 10:05:52 crc kubenswrapper[5131]: I0107 10:05:52.985373 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlqxh\" (UniqueName: \"kubernetes.io/projected/65a291b6-fbd4-4fed-ae92-622e0120c857-kube-api-access-dlqxh\") pod \"smart-gateway-operator-2-build\" (UID: \"65a291b6-fbd4-4fed-ae92-622e0120c857\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 07 10:05:52 crc kubenswrapper[5131]: I0107 10:05:52.985583 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/65a291b6-fbd4-4fed-ae92-622e0120c857-builder-dockercfg-vc6bg-push\") pod \"smart-gateway-operator-2-build\" (UID: \"65a291b6-fbd4-4fed-ae92-622e0120c857\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 07 10:05:52 crc kubenswrapper[5131]: I0107 10:05:52.985621 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/65a291b6-fbd4-4fed-ae92-622e0120c857-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"65a291b6-fbd4-4fed-ae92-622e0120c857\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 07 10:05:53 crc kubenswrapper[5131]: I0107 10:05:53.086596 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/65a291b6-fbd4-4fed-ae92-622e0120c857-builder-dockercfg-vc6bg-push\") pod \"smart-gateway-operator-2-build\" (UID: \"65a291b6-fbd4-4fed-ae92-622e0120c857\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 07 10:05:53 crc kubenswrapper[5131]: I0107 10:05:53.086682 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/65a291b6-fbd4-4fed-ae92-622e0120c857-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"65a291b6-fbd4-4fed-ae92-622e0120c857\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 07 10:05:53 crc kubenswrapper[5131]: I0107 10:05:53.086745 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/65a291b6-fbd4-4fed-ae92-622e0120c857-builder-dockercfg-vc6bg-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"65a291b6-fbd4-4fed-ae92-622e0120c857\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 07 10:05:53 crc kubenswrapper[5131]: I0107 10:05:53.086801 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/65a291b6-fbd4-4fed-ae92-622e0120c857-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"65a291b6-fbd4-4fed-ae92-622e0120c857\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 07 10:05:53 crc kubenswrapper[5131]: I0107 10:05:53.086878 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/65a291b6-fbd4-4fed-ae92-622e0120c857-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"65a291b6-fbd4-4fed-ae92-622e0120c857\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 07 10:05:53 crc kubenswrapper[5131]: I0107 10:05:53.086966 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/65a291b6-fbd4-4fed-ae92-622e0120c857-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"65a291b6-fbd4-4fed-ae92-622e0120c857\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 07 10:05:53 crc kubenswrapper[5131]: I0107 10:05:53.087014 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/65a291b6-fbd4-4fed-ae92-622e0120c857-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"65a291b6-fbd4-4fed-ae92-622e0120c857\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 07 10:05:53 crc kubenswrapper[5131]: I0107 10:05:53.087054 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/65a291b6-fbd4-4fed-ae92-622e0120c857-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"65a291b6-fbd4-4fed-ae92-622e0120c857\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 07 10:05:53 crc kubenswrapper[5131]: I0107 10:05:53.087217 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/65a291b6-fbd4-4fed-ae92-622e0120c857-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"65a291b6-fbd4-4fed-ae92-622e0120c857\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 07 10:05:53 crc kubenswrapper[5131]: I0107 10:05:53.087273 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/65a291b6-fbd4-4fed-ae92-622e0120c857-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"65a291b6-fbd4-4fed-ae92-622e0120c857\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 07 10:05:53 crc kubenswrapper[5131]: I0107 10:05:53.087398 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/65a291b6-fbd4-4fed-ae92-622e0120c857-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"65a291b6-fbd4-4fed-ae92-622e0120c857\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 07 10:05:53 crc kubenswrapper[5131]: I0107 10:05:53.087476 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dlqxh\" (UniqueName: \"kubernetes.io/projected/65a291b6-fbd4-4fed-ae92-622e0120c857-kube-api-access-dlqxh\") pod \"smart-gateway-operator-2-build\" (UID: \"65a291b6-fbd4-4fed-ae92-622e0120c857\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 07 10:05:53 crc kubenswrapper[5131]: I0107 10:05:53.088356 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/65a291b6-fbd4-4fed-ae92-622e0120c857-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"65a291b6-fbd4-4fed-ae92-622e0120c857\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 07 10:05:53 crc kubenswrapper[5131]: I0107 10:05:53.088459 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/65a291b6-fbd4-4fed-ae92-622e0120c857-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"65a291b6-fbd4-4fed-ae92-622e0120c857\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 07 10:05:53 crc kubenswrapper[5131]: I0107 10:05:53.088488 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/65a291b6-fbd4-4fed-ae92-622e0120c857-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"65a291b6-fbd4-4fed-ae92-622e0120c857\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 07 10:05:53 crc kubenswrapper[5131]: I0107 10:05:53.089167 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/65a291b6-fbd4-4fed-ae92-622e0120c857-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"65a291b6-fbd4-4fed-ae92-622e0120c857\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 07 10:05:53 crc kubenswrapper[5131]: I0107 10:05:53.089577 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/65a291b6-fbd4-4fed-ae92-622e0120c857-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"65a291b6-fbd4-4fed-ae92-622e0120c857\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 07 10:05:53 crc kubenswrapper[5131]: I0107 10:05:53.089811 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/65a291b6-fbd4-4fed-ae92-622e0120c857-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"65a291b6-fbd4-4fed-ae92-622e0120c857\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 07 10:05:53 crc kubenswrapper[5131]: I0107 10:05:53.089867 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/65a291b6-fbd4-4fed-ae92-622e0120c857-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"65a291b6-fbd4-4fed-ae92-622e0120c857\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 07 10:05:53 crc kubenswrapper[5131]: I0107 10:05:53.090299 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/65a291b6-fbd4-4fed-ae92-622e0120c857-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"65a291b6-fbd4-4fed-ae92-622e0120c857\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 07 10:05:53 crc kubenswrapper[5131]: I0107 10:05:53.091775 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/65a291b6-fbd4-4fed-ae92-622e0120c857-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"65a291b6-fbd4-4fed-ae92-622e0120c857\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 07 10:05:53 crc kubenswrapper[5131]: I0107 10:05:53.094423 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/65a291b6-fbd4-4fed-ae92-622e0120c857-builder-dockercfg-vc6bg-push\") pod \"smart-gateway-operator-2-build\" (UID: \"65a291b6-fbd4-4fed-ae92-622e0120c857\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 07 10:05:53 crc kubenswrapper[5131]: I0107 10:05:53.105042 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/65a291b6-fbd4-4fed-ae92-622e0120c857-builder-dockercfg-vc6bg-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"65a291b6-fbd4-4fed-ae92-622e0120c857\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 07 10:05:53 crc kubenswrapper[5131]: I0107 10:05:53.119344 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlqxh\" (UniqueName: \"kubernetes.io/projected/65a291b6-fbd4-4fed-ae92-622e0120c857-kube-api-access-dlqxh\") pod \"smart-gateway-operator-2-build\" (UID: \"65a291b6-fbd4-4fed-ae92-622e0120c857\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 07 10:05:53 crc kubenswrapper[5131]: I0107 10:05:53.127961 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build"
Jan 07 10:05:55 crc kubenswrapper[5131]: I0107 10:05:55.286903 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"]
Jan 07 10:05:55 crc kubenswrapper[5131]: I0107 10:05:55.296166 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_eddb10f4-8873-4951-9c05-41cb9d2fb31a/docker-build/0.log"
Jan 07 10:05:55 crc kubenswrapper[5131]: I0107 10:05:55.297040 5131 generic.go:358] "Generic (PLEG): container finished" podID="eddb10f4-8873-4951-9c05-41cb9d2fb31a" containerID="f00ca7f9d4072a79984b9a57ed5e2f9b5fb4e8b87129d582696077895b94064f" exitCode=1
Jan 07 10:05:55 crc kubenswrapper[5131]: I0107 10:05:55.297098 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"eddb10f4-8873-4951-9c05-41cb9d2fb31a","Type":"ContainerDied","Data":"f00ca7f9d4072a79984b9a57ed5e2f9b5fb4e8b87129d582696077895b94064f"}
Jan 07 10:05:55 crc kubenswrapper[5131]: W0107 10:05:55.326966 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65a291b6_fbd4_4fed_ae92_622e0120c857.slice/crio-995f228089db6bf208fef75570878bb619e3d50b2f611f1aa3b0e0973e6540ec WatchSource:0}: Error finding container 995f228089db6bf208fef75570878bb619e3d50b2f611f1aa3b0e0973e6540ec: Status 404 returned error can't find the container with id 995f228089db6bf208fef75570878bb619e3d50b2f611f1aa3b0e0973e6540ec
Jan 07 10:05:55 crc kubenswrapper[5131]: I0107 10:05:55.672616 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_eddb10f4-8873-4951-9c05-41cb9d2fb31a/docker-build/0.log"
Jan 07 10:05:55 crc kubenswrapper[5131]: I0107 10:05:55.673791 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build"
Jan 07 10:05:55 crc kubenswrapper[5131]: I0107 10:05:55.727549 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eddb10f4-8873-4951-9c05-41cb9d2fb31a-build-proxy-ca-bundles\") pod \"eddb10f4-8873-4951-9c05-41cb9d2fb31a\" (UID: \"eddb10f4-8873-4951-9c05-41cb9d2fb31a\") "
Jan 07 10:05:55 crc kubenswrapper[5131]: I0107 10:05:55.727598 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/eddb10f4-8873-4951-9c05-41cb9d2fb31a-container-storage-root\") pod \"eddb10f4-8873-4951-9c05-41cb9d2fb31a\" (UID: \"eddb10f4-8873-4951-9c05-41cb9d2fb31a\") "
Jan 07 10:05:55 crc kubenswrapper[5131]: I0107 10:05:55.727626 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/eddb10f4-8873-4951-9c05-41cb9d2fb31a-build-system-configs\") pod \"eddb10f4-8873-4951-9c05-41cb9d2fb31a\" (UID: \"eddb10f4-8873-4951-9c05-41cb9d2fb31a\") "
Jan 07 10:05:55 crc kubenswrapper[5131]: I0107 10:05:55.727679 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/eddb10f4-8873-4951-9c05-41cb9d2fb31a-build-blob-cache\") pod \"eddb10f4-8873-4951-9c05-41cb9d2fb31a\" (UID: \"eddb10f4-8873-4951-9c05-41cb9d2fb31a\") "
Jan 07 10:05:55 crc kubenswrapper[5131]: I0107 10:05:55.727732 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/eddb10f4-8873-4951-9c05-41cb9d2fb31a-builder-dockercfg-vc6bg-pull\") pod \"eddb10f4-8873-4951-9c05-41cb9d2fb31a\" (UID: \"eddb10f4-8873-4951-9c05-41cb9d2fb31a\") "
Jan 07 10:05:55 crc 
kubenswrapper[5131]: I0107 10:05:55.727750 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eddb10f4-8873-4951-9c05-41cb9d2fb31a-build-ca-bundles\") pod \"eddb10f4-8873-4951-9c05-41cb9d2fb31a\" (UID: \"eddb10f4-8873-4951-9c05-41cb9d2fb31a\") " Jan 07 10:05:55 crc kubenswrapper[5131]: I0107 10:05:55.727768 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/eddb10f4-8873-4951-9c05-41cb9d2fb31a-buildworkdir\") pod \"eddb10f4-8873-4951-9c05-41cb9d2fb31a\" (UID: \"eddb10f4-8873-4951-9c05-41cb9d2fb31a\") " Jan 07 10:05:55 crc kubenswrapper[5131]: I0107 10:05:55.727841 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8vqr\" (UniqueName: \"kubernetes.io/projected/eddb10f4-8873-4951-9c05-41cb9d2fb31a-kube-api-access-s8vqr\") pod \"eddb10f4-8873-4951-9c05-41cb9d2fb31a\" (UID: \"eddb10f4-8873-4951-9c05-41cb9d2fb31a\") " Jan 07 10:05:55 crc kubenswrapper[5131]: I0107 10:05:55.727905 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/eddb10f4-8873-4951-9c05-41cb9d2fb31a-container-storage-run\") pod \"eddb10f4-8873-4951-9c05-41cb9d2fb31a\" (UID: \"eddb10f4-8873-4951-9c05-41cb9d2fb31a\") " Jan 07 10:05:55 crc kubenswrapper[5131]: I0107 10:05:55.727944 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/eddb10f4-8873-4951-9c05-41cb9d2fb31a-builder-dockercfg-vc6bg-push\") pod \"eddb10f4-8873-4951-9c05-41cb9d2fb31a\" (UID: \"eddb10f4-8873-4951-9c05-41cb9d2fb31a\") " Jan 07 10:05:55 crc kubenswrapper[5131]: I0107 10:05:55.727974 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/eddb10f4-8873-4951-9c05-41cb9d2fb31a-node-pullsecrets\") pod \"eddb10f4-8873-4951-9c05-41cb9d2fb31a\" (UID: \"eddb10f4-8873-4951-9c05-41cb9d2fb31a\") " Jan 07 10:05:55 crc kubenswrapper[5131]: I0107 10:05:55.728448 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eddb10f4-8873-4951-9c05-41cb9d2fb31a-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "eddb10f4-8873-4951-9c05-41cb9d2fb31a" (UID: "eddb10f4-8873-4951-9c05-41cb9d2fb31a"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 07 10:05:55 crc kubenswrapper[5131]: I0107 10:05:55.728700 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eddb10f4-8873-4951-9c05-41cb9d2fb31a-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "eddb10f4-8873-4951-9c05-41cb9d2fb31a" (UID: "eddb10f4-8873-4951-9c05-41cb9d2fb31a"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 10:05:55 crc kubenswrapper[5131]: I0107 10:05:55.728844 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eddb10f4-8873-4951-9c05-41cb9d2fb31a-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "eddb10f4-8873-4951-9c05-41cb9d2fb31a" (UID: "eddb10f4-8873-4951-9c05-41cb9d2fb31a"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 10:05:55 crc kubenswrapper[5131]: I0107 10:05:55.728973 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/eddb10f4-8873-4951-9c05-41cb9d2fb31a-buildcachedir\") pod \"eddb10f4-8873-4951-9c05-41cb9d2fb31a\" (UID: \"eddb10f4-8873-4951-9c05-41cb9d2fb31a\") " Jan 07 10:05:55 crc kubenswrapper[5131]: I0107 10:05:55.729063 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eddb10f4-8873-4951-9c05-41cb9d2fb31a-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "eddb10f4-8873-4951-9c05-41cb9d2fb31a" (UID: "eddb10f4-8873-4951-9c05-41cb9d2fb31a"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 07 10:05:55 crc kubenswrapper[5131]: I0107 10:05:55.729150 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eddb10f4-8873-4951-9c05-41cb9d2fb31a-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "eddb10f4-8873-4951-9c05-41cb9d2fb31a" (UID: "eddb10f4-8873-4951-9c05-41cb9d2fb31a"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:05:55 crc kubenswrapper[5131]: I0107 10:05:55.729299 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eddb10f4-8873-4951-9c05-41cb9d2fb31a-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "eddb10f4-8873-4951-9c05-41cb9d2fb31a" (UID: "eddb10f4-8873-4951-9c05-41cb9d2fb31a"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 10:05:55 crc kubenswrapper[5131]: I0107 10:05:55.729402 5131 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/eddb10f4-8873-4951-9c05-41cb9d2fb31a-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 07 10:05:55 crc kubenswrapper[5131]: I0107 10:05:55.729419 5131 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eddb10f4-8873-4951-9c05-41cb9d2fb31a-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 07 10:05:55 crc kubenswrapper[5131]: I0107 10:05:55.729431 5131 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/eddb10f4-8873-4951-9c05-41cb9d2fb31a-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 07 10:05:55 crc kubenswrapper[5131]: I0107 10:05:55.729442 5131 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/eddb10f4-8873-4951-9c05-41cb9d2fb31a-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 07 10:05:55 crc kubenswrapper[5131]: I0107 10:05:55.729453 5131 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/eddb10f4-8873-4951-9c05-41cb9d2fb31a-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 07 10:05:55 crc kubenswrapper[5131]: I0107 10:05:55.729463 5131 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eddb10f4-8873-4951-9c05-41cb9d2fb31a-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 07 10:05:55 crc kubenswrapper[5131]: I0107 10:05:55.729811 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eddb10f4-8873-4951-9c05-41cb9d2fb31a-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod 
"eddb10f4-8873-4951-9c05-41cb9d2fb31a" (UID: "eddb10f4-8873-4951-9c05-41cb9d2fb31a"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:05:55 crc kubenswrapper[5131]: I0107 10:05:55.731914 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eddb10f4-8873-4951-9c05-41cb9d2fb31a-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "eddb10f4-8873-4951-9c05-41cb9d2fb31a" (UID: "eddb10f4-8873-4951-9c05-41cb9d2fb31a"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:05:55 crc kubenswrapper[5131]: I0107 10:05:55.734993 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eddb10f4-8873-4951-9c05-41cb9d2fb31a-builder-dockercfg-vc6bg-pull" (OuterVolumeSpecName: "builder-dockercfg-vc6bg-pull") pod "eddb10f4-8873-4951-9c05-41cb9d2fb31a" (UID: "eddb10f4-8873-4951-9c05-41cb9d2fb31a"). InnerVolumeSpecName "builder-dockercfg-vc6bg-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 10:05:55 crc kubenswrapper[5131]: I0107 10:05:55.736823 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eddb10f4-8873-4951-9c05-41cb9d2fb31a-kube-api-access-s8vqr" (OuterVolumeSpecName: "kube-api-access-s8vqr") pod "eddb10f4-8873-4951-9c05-41cb9d2fb31a" (UID: "eddb10f4-8873-4951-9c05-41cb9d2fb31a"). InnerVolumeSpecName "kube-api-access-s8vqr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 10:05:55 crc kubenswrapper[5131]: I0107 10:05:55.737022 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eddb10f4-8873-4951-9c05-41cb9d2fb31a-builder-dockercfg-vc6bg-push" (OuterVolumeSpecName: "builder-dockercfg-vc6bg-push") pod "eddb10f4-8873-4951-9c05-41cb9d2fb31a" (UID: "eddb10f4-8873-4951-9c05-41cb9d2fb31a"). 
InnerVolumeSpecName "builder-dockercfg-vc6bg-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 10:05:55 crc kubenswrapper[5131]: I0107 10:05:55.830822 5131 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/eddb10f4-8873-4951-9c05-41cb9d2fb31a-builder-dockercfg-vc6bg-pull\") on node \"crc\" DevicePath \"\"" Jan 07 10:05:55 crc kubenswrapper[5131]: I0107 10:05:55.830870 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-s8vqr\" (UniqueName: \"kubernetes.io/projected/eddb10f4-8873-4951-9c05-41cb9d2fb31a-kube-api-access-s8vqr\") on node \"crc\" DevicePath \"\"" Jan 07 10:05:55 crc kubenswrapper[5131]: I0107 10:05:55.830882 5131 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/eddb10f4-8873-4951-9c05-41cb9d2fb31a-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 07 10:05:55 crc kubenswrapper[5131]: I0107 10:05:55.830893 5131 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/eddb10f4-8873-4951-9c05-41cb9d2fb31a-builder-dockercfg-vc6bg-push\") on node \"crc\" DevicePath \"\"" Jan 07 10:05:55 crc kubenswrapper[5131]: I0107 10:05:55.830906 5131 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/eddb10f4-8873-4951-9c05-41cb9d2fb31a-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 07 10:05:55 crc kubenswrapper[5131]: I0107 10:05:55.881062 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eddb10f4-8873-4951-9c05-41cb9d2fb31a-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "eddb10f4-8873-4951-9c05-41cb9d2fb31a" (UID: "eddb10f4-8873-4951-9c05-41cb9d2fb31a"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:05:55 crc kubenswrapper[5131]: I0107 10:05:55.931948 5131 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/eddb10f4-8873-4951-9c05-41cb9d2fb31a-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 07 10:05:56 crc kubenswrapper[5131]: I0107 10:05:56.305310 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"65a291b6-fbd4-4fed-ae92-622e0120c857","Type":"ContainerStarted","Data":"f009ef647ea5de763dd48042b48892176456857d6268cf5ccc61197107cc7c60"} Jan 07 10:05:56 crc kubenswrapper[5131]: I0107 10:05:56.305613 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"65a291b6-fbd4-4fed-ae92-622e0120c857","Type":"ContainerStarted","Data":"995f228089db6bf208fef75570878bb619e3d50b2f611f1aa3b0e0973e6540ec"} Jan 07 10:05:56 crc kubenswrapper[5131]: I0107 10:05:56.307471 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_eddb10f4-8873-4951-9c05-41cb9d2fb31a/docker-build/0.log" Jan 07 10:05:56 crc kubenswrapper[5131]: I0107 10:05:56.308087 5131 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Jan 07 10:05:56 crc kubenswrapper[5131]: I0107 10:05:56.308300 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"eddb10f4-8873-4951-9c05-41cb9d2fb31a","Type":"ContainerDied","Data":"916729f4a37d1d82db54cfad9a47e8ded4a2f65fbaf6d323bd7d0b2cdd80ba4c"} Jan 07 10:05:56 crc kubenswrapper[5131]: I0107 10:05:56.308494 5131 scope.go:117] "RemoveContainer" containerID="f00ca7f9d4072a79984b9a57ed5e2f9b5fb4e8b87129d582696077895b94064f" Jan 07 10:05:56 crc kubenswrapper[5131]: I0107 10:05:56.374656 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 07 10:05:56 crc kubenswrapper[5131]: I0107 10:05:56.380660 5131 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 07 10:05:56 crc kubenswrapper[5131]: I0107 10:05:56.403073 5131 scope.go:117] "RemoveContainer" containerID="f47fa6a876db21ebadc5812195a746c47be9a1b144b51165b70d0c570faba7ca" Jan 07 10:05:57 crc kubenswrapper[5131]: I0107 10:05:57.318647 5131 generic.go:358] "Generic (PLEG): container finished" podID="65a291b6-fbd4-4fed-ae92-622e0120c857" containerID="f009ef647ea5de763dd48042b48892176456857d6268cf5ccc61197107cc7c60" exitCode=0 Jan 07 10:05:57 crc kubenswrapper[5131]: I0107 10:05:57.318734 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"65a291b6-fbd4-4fed-ae92-622e0120c857","Type":"ContainerDied","Data":"f009ef647ea5de763dd48042b48892176456857d6268cf5ccc61197107cc7c60"} Jan 07 10:05:58 crc kubenswrapper[5131]: I0107 10:05:58.188081 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eddb10f4-8873-4951-9c05-41cb9d2fb31a" path="/var/lib/kubelet/pods/eddb10f4-8873-4951-9c05-41cb9d2fb31a/volumes" Jan 07 10:05:58 crc kubenswrapper[5131]: I0107 10:05:58.330087 
5131 generic.go:358] "Generic (PLEG): container finished" podID="65a291b6-fbd4-4fed-ae92-622e0120c857" containerID="fc3b874355c8dc8fbe066ff789c44c9b3db2c8e6997c79d6543edd68a2c371e7" exitCode=0 Jan 07 10:05:58 crc kubenswrapper[5131]: I0107 10:05:58.330909 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"65a291b6-fbd4-4fed-ae92-622e0120c857","Type":"ContainerDied","Data":"fc3b874355c8dc8fbe066ff789c44c9b3db2c8e6997c79d6543edd68a2c371e7"} Jan 07 10:05:58 crc kubenswrapper[5131]: I0107 10:05:58.374067 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-2-build_65a291b6-fbd4-4fed-ae92-622e0120c857/manage-dockerfile/0.log" Jan 07 10:05:59 crc kubenswrapper[5131]: I0107 10:05:59.348141 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"65a291b6-fbd4-4fed-ae92-622e0120c857","Type":"ContainerStarted","Data":"bf09f73c6694c1ac9cdeac2c149e32373565da793aac6188ca1c8b3410941221"} Jan 07 10:05:59 crc kubenswrapper[5131]: I0107 10:05:59.399502 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-2-build" podStartSLOduration=10.399474189 podStartE2EDuration="10.399474189s" podCreationTimestamp="2026-01-07 10:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 10:05:59.39302963 +0000 UTC m=+987.559331234" watchObservedRunningTime="2026-01-07 10:05:59.399474189 +0000 UTC m=+987.565775783" Jan 07 10:06:00 crc kubenswrapper[5131]: I0107 10:06:00.134120 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29463006-ks8hg"] Jan 07 10:06:00 crc kubenswrapper[5131]: I0107 10:06:00.134765 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="eddb10f4-8873-4951-9c05-41cb9d2fb31a" containerName="manage-dockerfile" Jan 07 10:06:00 crc kubenswrapper[5131]: I0107 10:06:00.134783 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="eddb10f4-8873-4951-9c05-41cb9d2fb31a" containerName="manage-dockerfile" Jan 07 10:06:00 crc kubenswrapper[5131]: I0107 10:06:00.134795 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="eddb10f4-8873-4951-9c05-41cb9d2fb31a" containerName="docker-build" Jan 07 10:06:00 crc kubenswrapper[5131]: I0107 10:06:00.134800 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="eddb10f4-8873-4951-9c05-41cb9d2fb31a" containerName="docker-build" Jan 07 10:06:00 crc kubenswrapper[5131]: I0107 10:06:00.134925 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="eddb10f4-8873-4951-9c05-41cb9d2fb31a" containerName="docker-build" Jan 07 10:06:00 crc kubenswrapper[5131]: I0107 10:06:00.290783 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29463006-ks8hg"] Jan 07 10:06:00 crc kubenswrapper[5131]: I0107 10:06:00.290922 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29463006-ks8hg" Jan 07 10:06:00 crc kubenswrapper[5131]: I0107 10:06:00.293875 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-l8fwl\"" Jan 07 10:06:00 crc kubenswrapper[5131]: I0107 10:06:00.295010 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 07 10:06:00 crc kubenswrapper[5131]: I0107 10:06:00.299563 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 07 10:06:00 crc kubenswrapper[5131]: I0107 10:06:00.401748 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hjth\" (UniqueName: \"kubernetes.io/projected/7afc1fba-f972-4fd7-adc5-ce2af0f7f6c3-kube-api-access-2hjth\") pod \"auto-csr-approver-29463006-ks8hg\" (UID: \"7afc1fba-f972-4fd7-adc5-ce2af0f7f6c3\") " pod="openshift-infra/auto-csr-approver-29463006-ks8hg" Jan 07 10:06:00 crc kubenswrapper[5131]: I0107 10:06:00.503063 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2hjth\" (UniqueName: \"kubernetes.io/projected/7afc1fba-f972-4fd7-adc5-ce2af0f7f6c3-kube-api-access-2hjth\") pod \"auto-csr-approver-29463006-ks8hg\" (UID: \"7afc1fba-f972-4fd7-adc5-ce2af0f7f6c3\") " pod="openshift-infra/auto-csr-approver-29463006-ks8hg" Jan 07 10:06:00 crc kubenswrapper[5131]: I0107 10:06:00.531280 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hjth\" (UniqueName: \"kubernetes.io/projected/7afc1fba-f972-4fd7-adc5-ce2af0f7f6c3-kube-api-access-2hjth\") pod \"auto-csr-approver-29463006-ks8hg\" (UID: \"7afc1fba-f972-4fd7-adc5-ce2af0f7f6c3\") " pod="openshift-infra/auto-csr-approver-29463006-ks8hg" Jan 07 10:06:00 crc kubenswrapper[5131]: I0107 10:06:00.615558 5131 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29463006-ks8hg" Jan 07 10:06:00 crc kubenswrapper[5131]: I0107 10:06:00.831731 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29463006-ks8hg"] Jan 07 10:06:00 crc kubenswrapper[5131]: W0107 10:06:00.837211 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7afc1fba_f972_4fd7_adc5_ce2af0f7f6c3.slice/crio-ea1c3f25dab4a2062dd20df01fa741f8ff7a5fdb8fc30e9b0a496d0571923dfd WatchSource:0}: Error finding container ea1c3f25dab4a2062dd20df01fa741f8ff7a5fdb8fc30e9b0a496d0571923dfd: Status 404 returned error can't find the container with id ea1c3f25dab4a2062dd20df01fa741f8ff7a5fdb8fc30e9b0a496d0571923dfd Jan 07 10:06:01 crc kubenswrapper[5131]: I0107 10:06:01.364530 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29463006-ks8hg" event={"ID":"7afc1fba-f972-4fd7-adc5-ce2af0f7f6c3","Type":"ContainerStarted","Data":"ea1c3f25dab4a2062dd20df01fa741f8ff7a5fdb8fc30e9b0a496d0571923dfd"} Jan 07 10:06:02 crc kubenswrapper[5131]: I0107 10:06:02.371717 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29463006-ks8hg" event={"ID":"7afc1fba-f972-4fd7-adc5-ce2af0f7f6c3","Type":"ContainerStarted","Data":"57f9209ac00616c2c1bc7ccfbad15dd8dcc0b0893e1acee47dc16602a94e3ab8"} Jan 07 10:06:02 crc kubenswrapper[5131]: I0107 10:06:02.387106 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29463006-ks8hg" podStartSLOduration=1.375998075 podStartE2EDuration="2.387090271s" podCreationTimestamp="2026-01-07 10:06:00 +0000 UTC" firstStartedPulling="2026-01-07 10:06:00.838486637 +0000 UTC m=+989.004788201" lastFinishedPulling="2026-01-07 10:06:01.849578833 +0000 UTC m=+990.015880397" observedRunningTime="2026-01-07 10:06:02.382078098 
+0000 UTC m=+990.548379672" watchObservedRunningTime="2026-01-07 10:06:02.387090271 +0000 UTC m=+990.553391835" Jan 07 10:06:03 crc kubenswrapper[5131]: I0107 10:06:03.379096 5131 generic.go:358] "Generic (PLEG): container finished" podID="7afc1fba-f972-4fd7-adc5-ce2af0f7f6c3" containerID="57f9209ac00616c2c1bc7ccfbad15dd8dcc0b0893e1acee47dc16602a94e3ab8" exitCode=0 Jan 07 10:06:03 crc kubenswrapper[5131]: I0107 10:06:03.379307 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29463006-ks8hg" event={"ID":"7afc1fba-f972-4fd7-adc5-ce2af0f7f6c3","Type":"ContainerDied","Data":"57f9209ac00616c2c1bc7ccfbad15dd8dcc0b0893e1acee47dc16602a94e3ab8"} Jan 07 10:06:04 crc kubenswrapper[5131]: I0107 10:06:04.674775 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29463006-ks8hg" Jan 07 10:06:04 crc kubenswrapper[5131]: I0107 10:06:04.758710 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hjth\" (UniqueName: \"kubernetes.io/projected/7afc1fba-f972-4fd7-adc5-ce2af0f7f6c3-kube-api-access-2hjth\") pod \"7afc1fba-f972-4fd7-adc5-ce2af0f7f6c3\" (UID: \"7afc1fba-f972-4fd7-adc5-ce2af0f7f6c3\") " Jan 07 10:06:04 crc kubenswrapper[5131]: I0107 10:06:04.767373 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afc1fba-f972-4fd7-adc5-ce2af0f7f6c3-kube-api-access-2hjth" (OuterVolumeSpecName: "kube-api-access-2hjth") pod "7afc1fba-f972-4fd7-adc5-ce2af0f7f6c3" (UID: "7afc1fba-f972-4fd7-adc5-ce2af0f7f6c3"). InnerVolumeSpecName "kube-api-access-2hjth". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 10:06:04 crc kubenswrapper[5131]: I0107 10:06:04.860379 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2hjth\" (UniqueName: \"kubernetes.io/projected/7afc1fba-f972-4fd7-adc5-ce2af0f7f6c3-kube-api-access-2hjth\") on node \"crc\" DevicePath \"\"" Jan 07 10:06:05 crc kubenswrapper[5131]: I0107 10:06:05.259632 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29463000-2q7zm"] Jan 07 10:06:05 crc kubenswrapper[5131]: I0107 10:06:05.268785 5131 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29463000-2q7zm"] Jan 07 10:06:05 crc kubenswrapper[5131]: I0107 10:06:05.396798 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29463006-ks8hg" event={"ID":"7afc1fba-f972-4fd7-adc5-ce2af0f7f6c3","Type":"ContainerDied","Data":"ea1c3f25dab4a2062dd20df01fa741f8ff7a5fdb8fc30e9b0a496d0571923dfd"} Jan 07 10:06:05 crc kubenswrapper[5131]: I0107 10:06:05.396897 5131 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea1c3f25dab4a2062dd20df01fa741f8ff7a5fdb8fc30e9b0a496d0571923dfd" Jan 07 10:06:05 crc kubenswrapper[5131]: I0107 10:06:05.396816 5131 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29463006-ks8hg" Jan 07 10:06:06 crc kubenswrapper[5131]: I0107 10:06:06.187346 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d07f612e-1eb1-4386-936a-12fec40a84d2" path="/var/lib/kubelet/pods/d07f612e-1eb1-4386-936a-12fec40a84d2/volumes" Jan 07 10:06:20 crc kubenswrapper[5131]: I0107 10:06:20.663641 5131 patch_prober.go:28] interesting pod/machine-config-daemon-dvdrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 07 10:06:20 crc kubenswrapper[5131]: I0107 10:06:20.664568 5131 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 07 10:06:33 crc kubenswrapper[5131]: I0107 10:06:33.742825 5131 scope.go:117] "RemoveContainer" containerID="a8b1b2b71911c1e028a62910b91917425f2697eceb43a024e5f795f9223fde60" Jan 07 10:06:50 crc kubenswrapper[5131]: I0107 10:06:50.663377 5131 patch_prober.go:28] interesting pod/machine-config-daemon-dvdrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 07 10:06:50 crc kubenswrapper[5131]: I0107 10:06:50.664133 5131 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 07 10:07:11 crc kubenswrapper[5131]: I0107 10:07:11.940195 5131 generic.go:358] "Generic (PLEG): container finished" podID="65a291b6-fbd4-4fed-ae92-622e0120c857" containerID="bf09f73c6694c1ac9cdeac2c149e32373565da793aac6188ca1c8b3410941221" exitCode=0 Jan 07 10:07:11 crc kubenswrapper[5131]: I0107 10:07:11.940304 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"65a291b6-fbd4-4fed-ae92-622e0120c857","Type":"ContainerDied","Data":"bf09f73c6694c1ac9cdeac2c149e32373565da793aac6188ca1c8b3410941221"} Jan 07 10:07:13 crc kubenswrapper[5131]: I0107 10:07:13.232756 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Jan 07 10:07:13 crc kubenswrapper[5131]: I0107 10:07:13.330324 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/65a291b6-fbd4-4fed-ae92-622e0120c857-buildworkdir\") pod \"65a291b6-fbd4-4fed-ae92-622e0120c857\" (UID: \"65a291b6-fbd4-4fed-ae92-622e0120c857\") " Jan 07 10:07:13 crc kubenswrapper[5131]: I0107 10:07:13.330374 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/65a291b6-fbd4-4fed-ae92-622e0120c857-build-system-configs\") pod \"65a291b6-fbd4-4fed-ae92-622e0120c857\" (UID: \"65a291b6-fbd4-4fed-ae92-622e0120c857\") " Jan 07 10:07:13 crc kubenswrapper[5131]: I0107 10:07:13.330422 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/65a291b6-fbd4-4fed-ae92-622e0120c857-builder-dockercfg-vc6bg-push\") pod \"65a291b6-fbd4-4fed-ae92-622e0120c857\" (UID: \"65a291b6-fbd4-4fed-ae92-622e0120c857\") " Jan 07 10:07:13 crc kubenswrapper[5131]: I0107 10:07:13.330449 5131 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/65a291b6-fbd4-4fed-ae92-622e0120c857-buildcachedir\") pod \"65a291b6-fbd4-4fed-ae92-622e0120c857\" (UID: \"65a291b6-fbd4-4fed-ae92-622e0120c857\") " Jan 07 10:07:13 crc kubenswrapper[5131]: I0107 10:07:13.330485 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/65a291b6-fbd4-4fed-ae92-622e0120c857-build-blob-cache\") pod \"65a291b6-fbd4-4fed-ae92-622e0120c857\" (UID: \"65a291b6-fbd4-4fed-ae92-622e0120c857\") " Jan 07 10:07:13 crc kubenswrapper[5131]: I0107 10:07:13.330533 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/65a291b6-fbd4-4fed-ae92-622e0120c857-container-storage-root\") pod \"65a291b6-fbd4-4fed-ae92-622e0120c857\" (UID: \"65a291b6-fbd4-4fed-ae92-622e0120c857\") " Jan 07 10:07:13 crc kubenswrapper[5131]: I0107 10:07:13.330549 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/65a291b6-fbd4-4fed-ae92-622e0120c857-build-proxy-ca-bundles\") pod \"65a291b6-fbd4-4fed-ae92-622e0120c857\" (UID: \"65a291b6-fbd4-4fed-ae92-622e0120c857\") " Jan 07 10:07:13 crc kubenswrapper[5131]: I0107 10:07:13.330571 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/65a291b6-fbd4-4fed-ae92-622e0120c857-node-pullsecrets\") pod \"65a291b6-fbd4-4fed-ae92-622e0120c857\" (UID: \"65a291b6-fbd4-4fed-ae92-622e0120c857\") " Jan 07 10:07:13 crc kubenswrapper[5131]: I0107 10:07:13.330597 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/65a291b6-fbd4-4fed-ae92-622e0120c857-container-storage-run\") pod 
\"65a291b6-fbd4-4fed-ae92-622e0120c857\" (UID: \"65a291b6-fbd4-4fed-ae92-622e0120c857\") " Jan 07 10:07:13 crc kubenswrapper[5131]: I0107 10:07:13.330604 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65a291b6-fbd4-4fed-ae92-622e0120c857-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "65a291b6-fbd4-4fed-ae92-622e0120c857" (UID: "65a291b6-fbd4-4fed-ae92-622e0120c857"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 07 10:07:13 crc kubenswrapper[5131]: I0107 10:07:13.330656 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65a291b6-fbd4-4fed-ae92-622e0120c857-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "65a291b6-fbd4-4fed-ae92-622e0120c857" (UID: "65a291b6-fbd4-4fed-ae92-622e0120c857"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 07 10:07:13 crc kubenswrapper[5131]: I0107 10:07:13.331388 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65a291b6-fbd4-4fed-ae92-622e0120c857-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "65a291b6-fbd4-4fed-ae92-622e0120c857" (UID: "65a291b6-fbd4-4fed-ae92-622e0120c857"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 10:07:13 crc kubenswrapper[5131]: I0107 10:07:13.331422 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/65a291b6-fbd4-4fed-ae92-622e0120c857-builder-dockercfg-vc6bg-pull\") pod \"65a291b6-fbd4-4fed-ae92-622e0120c857\" (UID: \"65a291b6-fbd4-4fed-ae92-622e0120c857\") " Jan 07 10:07:13 crc kubenswrapper[5131]: I0107 10:07:13.331493 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlqxh\" (UniqueName: \"kubernetes.io/projected/65a291b6-fbd4-4fed-ae92-622e0120c857-kube-api-access-dlqxh\") pod \"65a291b6-fbd4-4fed-ae92-622e0120c857\" (UID: \"65a291b6-fbd4-4fed-ae92-622e0120c857\") " Jan 07 10:07:13 crc kubenswrapper[5131]: I0107 10:07:13.331519 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/65a291b6-fbd4-4fed-ae92-622e0120c857-build-ca-bundles\") pod \"65a291b6-fbd4-4fed-ae92-622e0120c857\" (UID: \"65a291b6-fbd4-4fed-ae92-622e0120c857\") " Jan 07 10:07:13 crc kubenswrapper[5131]: I0107 10:07:13.332039 5131 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/65a291b6-fbd4-4fed-ae92-622e0120c857-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 07 10:07:13 crc kubenswrapper[5131]: I0107 10:07:13.332056 5131 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/65a291b6-fbd4-4fed-ae92-622e0120c857-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 07 10:07:13 crc kubenswrapper[5131]: I0107 10:07:13.332068 5131 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/65a291b6-fbd4-4fed-ae92-622e0120c857-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 07 10:07:13 
crc kubenswrapper[5131]: I0107 10:07:13.332476 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65a291b6-fbd4-4fed-ae92-622e0120c857-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "65a291b6-fbd4-4fed-ae92-622e0120c857" (UID: "65a291b6-fbd4-4fed-ae92-622e0120c857"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 10:07:13 crc kubenswrapper[5131]: I0107 10:07:13.332779 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65a291b6-fbd4-4fed-ae92-622e0120c857-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "65a291b6-fbd4-4fed-ae92-622e0120c857" (UID: "65a291b6-fbd4-4fed-ae92-622e0120c857"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:07:13 crc kubenswrapper[5131]: I0107 10:07:13.333133 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65a291b6-fbd4-4fed-ae92-622e0120c857-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "65a291b6-fbd4-4fed-ae92-622e0120c857" (UID: "65a291b6-fbd4-4fed-ae92-622e0120c857"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 10:07:13 crc kubenswrapper[5131]: I0107 10:07:13.336864 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65a291b6-fbd4-4fed-ae92-622e0120c857-kube-api-access-dlqxh" (OuterVolumeSpecName: "kube-api-access-dlqxh") pod "65a291b6-fbd4-4fed-ae92-622e0120c857" (UID: "65a291b6-fbd4-4fed-ae92-622e0120c857"). InnerVolumeSpecName "kube-api-access-dlqxh". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 10:07:13 crc kubenswrapper[5131]: I0107 10:07:13.337097 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65a291b6-fbd4-4fed-ae92-622e0120c857-builder-dockercfg-vc6bg-push" (OuterVolumeSpecName: "builder-dockercfg-vc6bg-push") pod "65a291b6-fbd4-4fed-ae92-622e0120c857" (UID: "65a291b6-fbd4-4fed-ae92-622e0120c857"). InnerVolumeSpecName "builder-dockercfg-vc6bg-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 10:07:13 crc kubenswrapper[5131]: I0107 10:07:13.338778 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65a291b6-fbd4-4fed-ae92-622e0120c857-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "65a291b6-fbd4-4fed-ae92-622e0120c857" (UID: "65a291b6-fbd4-4fed-ae92-622e0120c857"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:07:13 crc kubenswrapper[5131]: I0107 10:07:13.341187 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65a291b6-fbd4-4fed-ae92-622e0120c857-builder-dockercfg-vc6bg-pull" (OuterVolumeSpecName: "builder-dockercfg-vc6bg-pull") pod "65a291b6-fbd4-4fed-ae92-622e0120c857" (UID: "65a291b6-fbd4-4fed-ae92-622e0120c857"). InnerVolumeSpecName "builder-dockercfg-vc6bg-pull". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 10:07:13 crc kubenswrapper[5131]: I0107 10:07:13.435721 5131 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/65a291b6-fbd4-4fed-ae92-622e0120c857-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 07 10:07:13 crc kubenswrapper[5131]: I0107 10:07:13.435966 5131 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/65a291b6-fbd4-4fed-ae92-622e0120c857-builder-dockercfg-vc6bg-pull\") on node \"crc\" DevicePath \"\"" Jan 07 10:07:13 crc kubenswrapper[5131]: I0107 10:07:13.435983 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dlqxh\" (UniqueName: \"kubernetes.io/projected/65a291b6-fbd4-4fed-ae92-622e0120c857-kube-api-access-dlqxh\") on node \"crc\" DevicePath \"\"" Jan 07 10:07:13 crc kubenswrapper[5131]: I0107 10:07:13.435996 5131 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/65a291b6-fbd4-4fed-ae92-622e0120c857-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 07 10:07:13 crc kubenswrapper[5131]: I0107 10:07:13.436008 5131 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/65a291b6-fbd4-4fed-ae92-622e0120c857-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 07 10:07:13 crc kubenswrapper[5131]: I0107 10:07:13.436058 5131 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/65a291b6-fbd4-4fed-ae92-622e0120c857-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 07 10:07:13 crc kubenswrapper[5131]: I0107 10:07:13.436069 5131 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/65a291b6-fbd4-4fed-ae92-622e0120c857-builder-dockercfg-vc6bg-push\") on node 
\"crc\" DevicePath \"\"" Jan 07 10:07:13 crc kubenswrapper[5131]: I0107 10:07:13.515288 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65a291b6-fbd4-4fed-ae92-622e0120c857-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "65a291b6-fbd4-4fed-ae92-622e0120c857" (UID: "65a291b6-fbd4-4fed-ae92-622e0120c857"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:07:13 crc kubenswrapper[5131]: I0107 10:07:13.537519 5131 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/65a291b6-fbd4-4fed-ae92-622e0120c857-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 07 10:07:13 crc kubenswrapper[5131]: I0107 10:07:13.957147 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Jan 07 10:07:13 crc kubenswrapper[5131]: I0107 10:07:13.957151 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"65a291b6-fbd4-4fed-ae92-622e0120c857","Type":"ContainerDied","Data":"995f228089db6bf208fef75570878bb619e3d50b2f611f1aa3b0e0973e6540ec"} Jan 07 10:07:13 crc kubenswrapper[5131]: I0107 10:07:13.957254 5131 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="995f228089db6bf208fef75570878bb619e3d50b2f611f1aa3b0e0973e6540ec" Jan 07 10:07:15 crc kubenswrapper[5131]: I0107 10:07:15.546068 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65a291b6-fbd4-4fed-ae92-622e0120c857-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "65a291b6-fbd4-4fed-ae92-622e0120c857" (UID: "65a291b6-fbd4-4fed-ae92-622e0120c857"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:07:15 crc kubenswrapper[5131]: I0107 10:07:15.574448 5131 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/65a291b6-fbd4-4fed-ae92-622e0120c857-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 07 10:07:18 crc kubenswrapper[5131]: I0107 10:07:18.206319 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 07 10:07:18 crc kubenswrapper[5131]: I0107 10:07:18.208748 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="65a291b6-fbd4-4fed-ae92-622e0120c857" containerName="docker-build" Jan 07 10:07:18 crc kubenswrapper[5131]: I0107 10:07:18.208797 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="65a291b6-fbd4-4fed-ae92-622e0120c857" containerName="docker-build" Jan 07 10:07:18 crc kubenswrapper[5131]: I0107 10:07:18.208863 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7afc1fba-f972-4fd7-adc5-ce2af0f7f6c3" containerName="oc" Jan 07 10:07:18 crc kubenswrapper[5131]: I0107 10:07:18.208884 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="7afc1fba-f972-4fd7-adc5-ce2af0f7f6c3" containerName="oc" Jan 07 10:07:18 crc kubenswrapper[5131]: I0107 10:07:18.208911 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="65a291b6-fbd4-4fed-ae92-622e0120c857" containerName="git-clone" Jan 07 10:07:18 crc kubenswrapper[5131]: I0107 10:07:18.208926 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="65a291b6-fbd4-4fed-ae92-622e0120c857" containerName="git-clone" Jan 07 10:07:18 crc kubenswrapper[5131]: I0107 10:07:18.209007 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="65a291b6-fbd4-4fed-ae92-622e0120c857" containerName="manage-dockerfile" Jan 07 10:07:18 crc kubenswrapper[5131]: I0107 10:07:18.209023 5131 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="65a291b6-fbd4-4fed-ae92-622e0120c857" containerName="manage-dockerfile" Jan 07 10:07:18 crc kubenswrapper[5131]: I0107 10:07:18.209240 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="7afc1fba-f972-4fd7-adc5-ce2af0f7f6c3" containerName="oc" Jan 07 10:07:18 crc kubenswrapper[5131]: I0107 10:07:18.209273 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="65a291b6-fbd4-4fed-ae92-622e0120c857" containerName="docker-build" Jan 07 10:07:18 crc kubenswrapper[5131]: I0107 10:07:18.244711 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 07 10:07:18 crc kubenswrapper[5131]: I0107 10:07:18.244890 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build" Jan 07 10:07:18 crc kubenswrapper[5131]: I0107 10:07:18.246860 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-1-ca\"" Jan 07 10:07:18 crc kubenswrapper[5131]: I0107 10:07:18.246999 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-1-global-ca\"" Jan 07 10:07:18 crc kubenswrapper[5131]: I0107 10:07:18.247022 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-1-sys-config\"" Jan 07 10:07:18 crc kubenswrapper[5131]: I0107 10:07:18.248041 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-vc6bg\"" Jan 07 10:07:18 crc kubenswrapper[5131]: I0107 10:07:18.413531 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-container-storage-root\") pod \"sg-core-1-build\" (UID: \"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2\") " 
pod="service-telemetry/sg-core-1-build" Jan 07 10:07:18 crc kubenswrapper[5131]: I0107 10:07:18.413603 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2\") " pod="service-telemetry/sg-core-1-build" Jan 07 10:07:18 crc kubenswrapper[5131]: I0107 10:07:18.413676 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-buildworkdir\") pod \"sg-core-1-build\" (UID: \"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2\") " pod="service-telemetry/sg-core-1-build" Jan 07 10:07:18 crc kubenswrapper[5131]: I0107 10:07:18.413712 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-build-system-configs\") pod \"sg-core-1-build\" (UID: \"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2\") " pod="service-telemetry/sg-core-1-build" Jan 07 10:07:18 crc kubenswrapper[5131]: I0107 10:07:18.413755 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2\") " pod="service-telemetry/sg-core-1-build" Jan 07 10:07:18 crc kubenswrapper[5131]: I0107 10:07:18.413790 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzlbr\" (UniqueName: \"kubernetes.io/projected/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-kube-api-access-wzlbr\") pod \"sg-core-1-build\" (UID: \"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2\") " 
pod="service-telemetry/sg-core-1-build" Jan 07 10:07:18 crc kubenswrapper[5131]: I0107 10:07:18.414081 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2\") " pod="service-telemetry/sg-core-1-build" Jan 07 10:07:18 crc kubenswrapper[5131]: I0107 10:07:18.414168 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-builder-dockercfg-vc6bg-pull\") pod \"sg-core-1-build\" (UID: \"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2\") " pod="service-telemetry/sg-core-1-build" Jan 07 10:07:18 crc kubenswrapper[5131]: I0107 10:07:18.414219 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-buildcachedir\") pod \"sg-core-1-build\" (UID: \"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2\") " pod="service-telemetry/sg-core-1-build" Jan 07 10:07:18 crc kubenswrapper[5131]: I0107 10:07:18.414256 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2\") " pod="service-telemetry/sg-core-1-build" Jan 07 10:07:18 crc kubenswrapper[5131]: I0107 10:07:18.414546 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-builder-dockercfg-vc6bg-push\") pod \"sg-core-1-build\" (UID: 
\"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2\") " pod="service-telemetry/sg-core-1-build" Jan 07 10:07:18 crc kubenswrapper[5131]: I0107 10:07:18.414596 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-container-storage-run\") pod \"sg-core-1-build\" (UID: \"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2\") " pod="service-telemetry/sg-core-1-build" Jan 07 10:07:18 crc kubenswrapper[5131]: I0107 10:07:18.516614 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2\") " pod="service-telemetry/sg-core-1-build" Jan 07 10:07:18 crc kubenswrapper[5131]: I0107 10:07:18.516690 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-buildworkdir\") pod \"sg-core-1-build\" (UID: \"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2\") " pod="service-telemetry/sg-core-1-build" Jan 07 10:07:18 crc kubenswrapper[5131]: I0107 10:07:18.516735 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-build-system-configs\") pod \"sg-core-1-build\" (UID: \"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2\") " pod="service-telemetry/sg-core-1-build" Jan 07 10:07:18 crc kubenswrapper[5131]: I0107 10:07:18.516798 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2\") " pod="service-telemetry/sg-core-1-build" Jan 07 
10:07:18 crc kubenswrapper[5131]: I0107 10:07:18.516902 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wzlbr\" (UniqueName: \"kubernetes.io/projected/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-kube-api-access-wzlbr\") pod \"sg-core-1-build\" (UID: \"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2\") " pod="service-telemetry/sg-core-1-build" Jan 07 10:07:18 crc kubenswrapper[5131]: I0107 10:07:18.517459 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2\") " pod="service-telemetry/sg-core-1-build" Jan 07 10:07:18 crc kubenswrapper[5131]: I0107 10:07:18.517515 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2\") " pod="service-telemetry/sg-core-1-build" Jan 07 10:07:18 crc kubenswrapper[5131]: I0107 10:07:18.517488 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-buildworkdir\") pod \"sg-core-1-build\" (UID: \"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2\") " pod="service-telemetry/sg-core-1-build" Jan 07 10:07:18 crc kubenswrapper[5131]: I0107 10:07:18.517609 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-builder-dockercfg-vc6bg-pull\") pod \"sg-core-1-build\" (UID: \"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2\") " pod="service-telemetry/sg-core-1-build" Jan 07 10:07:18 crc kubenswrapper[5131]: I0107 10:07:18.517652 5131 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-buildcachedir\") pod \"sg-core-1-build\" (UID: \"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2\") " pod="service-telemetry/sg-core-1-build" Jan 07 10:07:18 crc kubenswrapper[5131]: I0107 10:07:18.517643 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2\") " pod="service-telemetry/sg-core-1-build" Jan 07 10:07:18 crc kubenswrapper[5131]: I0107 10:07:18.517681 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2\") " pod="service-telemetry/sg-core-1-build" Jan 07 10:07:18 crc kubenswrapper[5131]: I0107 10:07:18.517723 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-builder-dockercfg-vc6bg-push\") pod \"sg-core-1-build\" (UID: \"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2\") " pod="service-telemetry/sg-core-1-build" Jan 07 10:07:18 crc kubenswrapper[5131]: I0107 10:07:18.517730 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-build-system-configs\") pod \"sg-core-1-build\" (UID: \"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2\") " pod="service-telemetry/sg-core-1-build" Jan 07 10:07:18 crc kubenswrapper[5131]: I0107 10:07:18.517788 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: 
\"kubernetes.io/host-path/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-buildcachedir\") pod \"sg-core-1-build\" (UID: \"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2\") " pod="service-telemetry/sg-core-1-build" Jan 07 10:07:18 crc kubenswrapper[5131]: I0107 10:07:18.517797 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-container-storage-run\") pod \"sg-core-1-build\" (UID: \"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2\") " pod="service-telemetry/sg-core-1-build" Jan 07 10:07:18 crc kubenswrapper[5131]: I0107 10:07:18.518025 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-container-storage-root\") pod \"sg-core-1-build\" (UID: \"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2\") " pod="service-telemetry/sg-core-1-build" Jan 07 10:07:18 crc kubenswrapper[5131]: I0107 10:07:18.518668 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-container-storage-root\") pod \"sg-core-1-build\" (UID: \"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2\") " pod="service-telemetry/sg-core-1-build" Jan 07 10:07:18 crc kubenswrapper[5131]: I0107 10:07:18.518942 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2\") " pod="service-telemetry/sg-core-1-build" Jan 07 10:07:18 crc kubenswrapper[5131]: I0107 10:07:18.519018 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-container-storage-run\") pod 
\"sg-core-1-build\" (UID: \"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2\") " pod="service-telemetry/sg-core-1-build" Jan 07 10:07:18 crc kubenswrapper[5131]: I0107 10:07:18.519203 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2\") " pod="service-telemetry/sg-core-1-build" Jan 07 10:07:18 crc kubenswrapper[5131]: I0107 10:07:18.527380 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-builder-dockercfg-vc6bg-push\") pod \"sg-core-1-build\" (UID: \"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2\") " pod="service-telemetry/sg-core-1-build" Jan 07 10:07:18 crc kubenswrapper[5131]: I0107 10:07:18.533247 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-builder-dockercfg-vc6bg-pull\") pod \"sg-core-1-build\" (UID: \"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2\") " pod="service-telemetry/sg-core-1-build" Jan 07 10:07:18 crc kubenswrapper[5131]: I0107 10:07:18.549165 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzlbr\" (UniqueName: \"kubernetes.io/projected/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-kube-api-access-wzlbr\") pod \"sg-core-1-build\" (UID: \"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2\") " pod="service-telemetry/sg-core-1-build" Jan 07 10:07:18 crc kubenswrapper[5131]: I0107 10:07:18.563052 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-1-build" Jan 07 10:07:19 crc kubenswrapper[5131]: I0107 10:07:19.038819 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 07 10:07:20 crc kubenswrapper[5131]: I0107 10:07:20.016678 5131 generic.go:358] "Generic (PLEG): container finished" podID="5e6843e7-ae0e-4aba-b0e3-f605beb81ea2" containerID="519e31fd1f95850d51ec7ce4f5b9aa960eb855756129bb2e49e23e1a419f76e7" exitCode=0 Jan 07 10:07:20 crc kubenswrapper[5131]: I0107 10:07:20.016790 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2","Type":"ContainerDied","Data":"519e31fd1f95850d51ec7ce4f5b9aa960eb855756129bb2e49e23e1a419f76e7"} Jan 07 10:07:20 crc kubenswrapper[5131]: I0107 10:07:20.017217 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2","Type":"ContainerStarted","Data":"b18925bc1eecf5618f4c2df3b49b7f9b2f5a201d53bcc5b16f56f4e68b436185"} Jan 07 10:07:20 crc kubenswrapper[5131]: I0107 10:07:20.663952 5131 patch_prober.go:28] interesting pod/machine-config-daemon-dvdrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 07 10:07:20 crc kubenswrapper[5131]: I0107 10:07:20.664055 5131 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 07 10:07:20 crc kubenswrapper[5131]: I0107 10:07:20.664119 5131 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" Jan 07 10:07:20 crc kubenswrapper[5131]: I0107 10:07:20.664945 5131 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"763e1eb5460745b4cb7278fb5c4fbd6802295fde5df336a494e758ddf511ec87"} pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 07 10:07:20 crc kubenswrapper[5131]: I0107 10:07:20.665016 5131 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73" containerName="machine-config-daemon" containerID="cri-o://763e1eb5460745b4cb7278fb5c4fbd6802295fde5df336a494e758ddf511ec87" gracePeriod=600 Jan 07 10:07:21 crc kubenswrapper[5131]: I0107 10:07:21.032031 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2","Type":"ContainerStarted","Data":"798c4d4c6721973c4b0a9cdd1d4bf19f747cdc7403b6d2e70720110c1eddb8a7"} Jan 07 10:07:21 crc kubenswrapper[5131]: I0107 10:07:21.036107 5131 generic.go:358] "Generic (PLEG): container finished" podID="3942e752-44ba-4678-8723-6cd778e60d73" containerID="763e1eb5460745b4cb7278fb5c4fbd6802295fde5df336a494e758ddf511ec87" exitCode=0 Jan 07 10:07:21 crc kubenswrapper[5131]: I0107 10:07:21.036150 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" event={"ID":"3942e752-44ba-4678-8723-6cd778e60d73","Type":"ContainerDied","Data":"763e1eb5460745b4cb7278fb5c4fbd6802295fde5df336a494e758ddf511ec87"} Jan 07 10:07:21 crc kubenswrapper[5131]: I0107 10:07:21.036236 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" 
event={"ID":"3942e752-44ba-4678-8723-6cd778e60d73","Type":"ContainerStarted","Data":"6a3199d24d9f75069e3d6ef18dc98d686384b2b6b4a377d2ed0dde963838ac1e"} Jan 07 10:07:21 crc kubenswrapper[5131]: I0107 10:07:21.036256 5131 scope.go:117] "RemoveContainer" containerID="7b5fd7c41683ca17dd95a35646c53ce725c855bc5bff2a2030ae596afb470eb2" Jan 07 10:07:21 crc kubenswrapper[5131]: I0107 10:07:21.079895 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-core-1-build" podStartSLOduration=3.079865234 podStartE2EDuration="3.079865234s" podCreationTimestamp="2026-01-07 10:07:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 10:07:21.0691382 +0000 UTC m=+1069.235439804" watchObservedRunningTime="2026-01-07 10:07:21.079865234 +0000 UTC m=+1069.246166818" Jan 07 10:07:28 crc kubenswrapper[5131]: I0107 10:07:28.653460 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 07 10:07:28 crc kubenswrapper[5131]: I0107 10:07:28.654432 5131 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/sg-core-1-build" podUID="5e6843e7-ae0e-4aba-b0e3-f605beb81ea2" containerName="docker-build" containerID="cri-o://798c4d4c6721973c4b0a9cdd1d4bf19f747cdc7403b6d2e70720110c1eddb8a7" gracePeriod=30 Jan 07 10:07:29 crc kubenswrapper[5131]: I0107 10:07:29.106454 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_5e6843e7-ae0e-4aba-b0e3-f605beb81ea2/docker-build/0.log" Jan 07 10:07:29 crc kubenswrapper[5131]: I0107 10:07:29.109046 5131 generic.go:358] "Generic (PLEG): container finished" podID="5e6843e7-ae0e-4aba-b0e3-f605beb81ea2" containerID="798c4d4c6721973c4b0a9cdd1d4bf19f747cdc7403b6d2e70720110c1eddb8a7" exitCode=1 Jan 07 10:07:29 crc kubenswrapper[5131]: I0107 10:07:29.109141 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/sg-core-1-build" event={"ID":"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2","Type":"ContainerDied","Data":"798c4d4c6721973c4b0a9cdd1d4bf19f747cdc7403b6d2e70720110c1eddb8a7"} Jan 07 10:07:29 crc kubenswrapper[5131]: I0107 10:07:29.234093 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_5e6843e7-ae0e-4aba-b0e3-f605beb81ea2/docker-build/0.log" Jan 07 10:07:29 crc kubenswrapper[5131]: I0107 10:07:29.234808 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build" Jan 07 10:07:29 crc kubenswrapper[5131]: I0107 10:07:29.286031 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-node-pullsecrets\") pod \"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2\" (UID: \"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2\") " Jan 07 10:07:29 crc kubenswrapper[5131]: I0107 10:07:29.286110 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-build-blob-cache\") pod \"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2\" (UID: \"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2\") " Jan 07 10:07:29 crc kubenswrapper[5131]: I0107 10:07:29.286155 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-build-ca-bundles\") pod \"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2\" (UID: \"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2\") " Jan 07 10:07:29 crc kubenswrapper[5131]: I0107 10:07:29.286217 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-builder-dockercfg-vc6bg-push\") pod \"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2\" 
(UID: \"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2\") " Jan 07 10:07:29 crc kubenswrapper[5131]: I0107 10:07:29.286323 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-container-storage-root\") pod \"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2\" (UID: \"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2\") " Jan 07 10:07:29 crc kubenswrapper[5131]: I0107 10:07:29.286350 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzlbr\" (UniqueName: \"kubernetes.io/projected/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-kube-api-access-wzlbr\") pod \"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2\" (UID: \"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2\") " Jan 07 10:07:29 crc kubenswrapper[5131]: I0107 10:07:29.286449 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-builder-dockercfg-vc6bg-pull\") pod \"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2\" (UID: \"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2\") " Jan 07 10:07:29 crc kubenswrapper[5131]: I0107 10:07:29.286506 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-buildworkdir\") pod \"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2\" (UID: \"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2\") " Jan 07 10:07:29 crc kubenswrapper[5131]: I0107 10:07:29.286539 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-buildcachedir\") pod \"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2\" (UID: \"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2\") " Jan 07 10:07:29 crc kubenswrapper[5131]: I0107 10:07:29.286607 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume 
started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-build-system-configs\") pod \"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2\" (UID: \"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2\") " Jan 07 10:07:29 crc kubenswrapper[5131]: I0107 10:07:29.286649 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-container-storage-run\") pod \"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2\" (UID: \"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2\") " Jan 07 10:07:29 crc kubenswrapper[5131]: I0107 10:07:29.286673 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-build-proxy-ca-bundles\") pod \"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2\" (UID: \"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2\") " Jan 07 10:07:29 crc kubenswrapper[5131]: I0107 10:07:29.288050 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "5e6843e7-ae0e-4aba-b0e3-f605beb81ea2" (UID: "5e6843e7-ae0e-4aba-b0e3-f605beb81ea2"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 07 10:07:29 crc kubenswrapper[5131]: I0107 10:07:29.288499 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "5e6843e7-ae0e-4aba-b0e3-f605beb81ea2" (UID: "5e6843e7-ae0e-4aba-b0e3-f605beb81ea2"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 10:07:29 crc kubenswrapper[5131]: I0107 10:07:29.288594 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "5e6843e7-ae0e-4aba-b0e3-f605beb81ea2" (UID: "5e6843e7-ae0e-4aba-b0e3-f605beb81ea2"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:07:29 crc kubenswrapper[5131]: I0107 10:07:29.288644 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "5e6843e7-ae0e-4aba-b0e3-f605beb81ea2" (UID: "5e6843e7-ae0e-4aba-b0e3-f605beb81ea2"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 07 10:07:29 crc kubenswrapper[5131]: I0107 10:07:29.288937 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "5e6843e7-ae0e-4aba-b0e3-f605beb81ea2" (UID: "5e6843e7-ae0e-4aba-b0e3-f605beb81ea2"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:07:29 crc kubenswrapper[5131]: I0107 10:07:29.289271 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "5e6843e7-ae0e-4aba-b0e3-f605beb81ea2" (UID: "5e6843e7-ae0e-4aba-b0e3-f605beb81ea2"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 10:07:29 crc kubenswrapper[5131]: I0107 10:07:29.290719 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "5e6843e7-ae0e-4aba-b0e3-f605beb81ea2" (UID: "5e6843e7-ae0e-4aba-b0e3-f605beb81ea2"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 10:07:29 crc kubenswrapper[5131]: I0107 10:07:29.295723 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-kube-api-access-wzlbr" (OuterVolumeSpecName: "kube-api-access-wzlbr") pod "5e6843e7-ae0e-4aba-b0e3-f605beb81ea2" (UID: "5e6843e7-ae0e-4aba-b0e3-f605beb81ea2"). InnerVolumeSpecName "kube-api-access-wzlbr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 10:07:29 crc kubenswrapper[5131]: I0107 10:07:29.296357 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-builder-dockercfg-vc6bg-pull" (OuterVolumeSpecName: "builder-dockercfg-vc6bg-pull") pod "5e6843e7-ae0e-4aba-b0e3-f605beb81ea2" (UID: "5e6843e7-ae0e-4aba-b0e3-f605beb81ea2"). InnerVolumeSpecName "builder-dockercfg-vc6bg-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 10:07:29 crc kubenswrapper[5131]: I0107 10:07:29.296571 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-builder-dockercfg-vc6bg-push" (OuterVolumeSpecName: "builder-dockercfg-vc6bg-push") pod "5e6843e7-ae0e-4aba-b0e3-f605beb81ea2" (UID: "5e6843e7-ae0e-4aba-b0e3-f605beb81ea2"). InnerVolumeSpecName "builder-dockercfg-vc6bg-push". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 10:07:29 crc kubenswrapper[5131]: I0107 10:07:29.373712 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "5e6843e7-ae0e-4aba-b0e3-f605beb81ea2" (UID: "5e6843e7-ae0e-4aba-b0e3-f605beb81ea2"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:07:29 crc kubenswrapper[5131]: I0107 10:07:29.388622 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wzlbr\" (UniqueName: \"kubernetes.io/projected/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-kube-api-access-wzlbr\") on node \"crc\" DevicePath \"\"" Jan 07 10:07:29 crc kubenswrapper[5131]: I0107 10:07:29.388649 5131 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-builder-dockercfg-vc6bg-pull\") on node \"crc\" DevicePath \"\"" Jan 07 10:07:29 crc kubenswrapper[5131]: I0107 10:07:29.388660 5131 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 07 10:07:29 crc kubenswrapper[5131]: I0107 10:07:29.388670 5131 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 07 10:07:29 crc kubenswrapper[5131]: I0107 10:07:29.388679 5131 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 07 10:07:29 crc kubenswrapper[5131]: I0107 10:07:29.388689 5131 reconciler_common.go:299] "Volume detached 
for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 07 10:07:29 crc kubenswrapper[5131]: I0107 10:07:29.388698 5131 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 07 10:07:29 crc kubenswrapper[5131]: I0107 10:07:29.388706 5131 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 07 10:07:29 crc kubenswrapper[5131]: I0107 10:07:29.388713 5131 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 07 10:07:29 crc kubenswrapper[5131]: I0107 10:07:29.388721 5131 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 07 10:07:29 crc kubenswrapper[5131]: I0107 10:07:29.388731 5131 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-builder-dockercfg-vc6bg-push\") on node \"crc\" DevicePath \"\"" Jan 07 10:07:29 crc kubenswrapper[5131]: I0107 10:07:29.428769 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "5e6843e7-ae0e-4aba-b0e3-f605beb81ea2" (UID: "5e6843e7-ae0e-4aba-b0e3-f605beb81ea2"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:07:29 crc kubenswrapper[5131]: I0107 10:07:29.490786 5131 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 07 10:07:30 crc kubenswrapper[5131]: I0107 10:07:30.120735 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_5e6843e7-ae0e-4aba-b0e3-f605beb81ea2/docker-build/0.log" Jan 07 10:07:30 crc kubenswrapper[5131]: I0107 10:07:30.121767 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build" Jan 07 10:07:30 crc kubenswrapper[5131]: I0107 10:07:30.121764 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"5e6843e7-ae0e-4aba-b0e3-f605beb81ea2","Type":"ContainerDied","Data":"b18925bc1eecf5618f4c2df3b49b7f9b2f5a201d53bcc5b16f56f4e68b436185"} Jan 07 10:07:30 crc kubenswrapper[5131]: I0107 10:07:30.122017 5131 scope.go:117] "RemoveContainer" containerID="798c4d4c6721973c4b0a9cdd1d4bf19f747cdc7403b6d2e70720110c1eddb8a7" Jan 07 10:07:30 crc kubenswrapper[5131]: I0107 10:07:30.193168 5131 scope.go:117] "RemoveContainer" containerID="519e31fd1f95850d51ec7ce4f5b9aa960eb855756129bb2e49e23e1a419f76e7" Jan 07 10:07:30 crc kubenswrapper[5131]: I0107 10:07:30.201084 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 07 10:07:30 crc kubenswrapper[5131]: I0107 10:07:30.201124 5131 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 07 10:07:30 crc kubenswrapper[5131]: I0107 10:07:30.309682 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-core-2-build"] Jan 07 10:07:30 crc kubenswrapper[5131]: I0107 10:07:30.310819 5131 cpu_manager.go:401] "RemoveStaleState: 
containerMap: removing container" podUID="5e6843e7-ae0e-4aba-b0e3-f605beb81ea2" containerName="docker-build" Jan 07 10:07:30 crc kubenswrapper[5131]: I0107 10:07:30.310882 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e6843e7-ae0e-4aba-b0e3-f605beb81ea2" containerName="docker-build" Jan 07 10:07:30 crc kubenswrapper[5131]: I0107 10:07:30.310909 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5e6843e7-ae0e-4aba-b0e3-f605beb81ea2" containerName="manage-dockerfile" Jan 07 10:07:30 crc kubenswrapper[5131]: I0107 10:07:30.310923 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e6843e7-ae0e-4aba-b0e3-f605beb81ea2" containerName="manage-dockerfile" Jan 07 10:07:30 crc kubenswrapper[5131]: I0107 10:07:30.311102 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="5e6843e7-ae0e-4aba-b0e3-f605beb81ea2" containerName="docker-build" Jan 07 10:07:30 crc kubenswrapper[5131]: I0107 10:07:30.343482 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-2-build"] Jan 07 10:07:30 crc kubenswrapper[5131]: I0107 10:07:30.343761 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-2-build" Jan 07 10:07:30 crc kubenswrapper[5131]: I0107 10:07:30.346051 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-2-sys-config\"" Jan 07 10:07:30 crc kubenswrapper[5131]: I0107 10:07:30.346321 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-2-ca\"" Jan 07 10:07:30 crc kubenswrapper[5131]: I0107 10:07:30.346535 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-2-global-ca\"" Jan 07 10:07:30 crc kubenswrapper[5131]: I0107 10:07:30.346591 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-vc6bg\"" Jan 07 10:07:30 crc kubenswrapper[5131]: I0107 10:07:30.403951 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"b633a95a-5d3c-4174-9ce0-71bd7d6feba7\") " pod="service-telemetry/sg-core-2-build" Jan 07 10:07:30 crc kubenswrapper[5131]: I0107 10:07:30.404112 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-buildcachedir\") pod \"sg-core-2-build\" (UID: \"b633a95a-5d3c-4174-9ce0-71bd7d6feba7\") " pod="service-telemetry/sg-core-2-build" Jan 07 10:07:30 crc kubenswrapper[5131]: I0107 10:07:30.404153 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"b633a95a-5d3c-4174-9ce0-71bd7d6feba7\") " 
pod="service-telemetry/sg-core-2-build" Jan 07 10:07:30 crc kubenswrapper[5131]: I0107 10:07:30.404211 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-builder-dockercfg-vc6bg-pull\") pod \"sg-core-2-build\" (UID: \"b633a95a-5d3c-4174-9ce0-71bd7d6feba7\") " pod="service-telemetry/sg-core-2-build" Jan 07 10:07:30 crc kubenswrapper[5131]: I0107 10:07:30.404248 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-container-storage-run\") pod \"sg-core-2-build\" (UID: \"b633a95a-5d3c-4174-9ce0-71bd7d6feba7\") " pod="service-telemetry/sg-core-2-build" Jan 07 10:07:30 crc kubenswrapper[5131]: I0107 10:07:30.404309 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"b633a95a-5d3c-4174-9ce0-71bd7d6feba7\") " pod="service-telemetry/sg-core-2-build" Jan 07 10:07:30 crc kubenswrapper[5131]: I0107 10:07:30.404348 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"b633a95a-5d3c-4174-9ce0-71bd7d6feba7\") " pod="service-telemetry/sg-core-2-build" Jan 07 10:07:30 crc kubenswrapper[5131]: I0107 10:07:30.404417 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-builder-dockercfg-vc6bg-push\") pod \"sg-core-2-build\" (UID: 
\"b633a95a-5d3c-4174-9ce0-71bd7d6feba7\") " pod="service-telemetry/sg-core-2-build" Jan 07 10:07:30 crc kubenswrapper[5131]: I0107 10:07:30.404491 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-build-system-configs\") pod \"sg-core-2-build\" (UID: \"b633a95a-5d3c-4174-9ce0-71bd7d6feba7\") " pod="service-telemetry/sg-core-2-build" Jan 07 10:07:30 crc kubenswrapper[5131]: I0107 10:07:30.404533 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-container-storage-root\") pod \"sg-core-2-build\" (UID: \"b633a95a-5d3c-4174-9ce0-71bd7d6feba7\") " pod="service-telemetry/sg-core-2-build" Jan 07 10:07:30 crc kubenswrapper[5131]: I0107 10:07:30.404658 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-buildworkdir\") pod \"sg-core-2-build\" (UID: \"b633a95a-5d3c-4174-9ce0-71bd7d6feba7\") " pod="service-telemetry/sg-core-2-build" Jan 07 10:07:30 crc kubenswrapper[5131]: I0107 10:07:30.404722 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv6l2\" (UniqueName: \"kubernetes.io/projected/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-kube-api-access-kv6l2\") pod \"sg-core-2-build\" (UID: \"b633a95a-5d3c-4174-9ce0-71bd7d6feba7\") " pod="service-telemetry/sg-core-2-build" Jan 07 10:07:30 crc kubenswrapper[5131]: I0107 10:07:30.505937 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-build-system-configs\") pod \"sg-core-2-build\" (UID: 
\"b633a95a-5d3c-4174-9ce0-71bd7d6feba7\") " pod="service-telemetry/sg-core-2-build" Jan 07 10:07:30 crc kubenswrapper[5131]: I0107 10:07:30.506008 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-container-storage-root\") pod \"sg-core-2-build\" (UID: \"b633a95a-5d3c-4174-9ce0-71bd7d6feba7\") " pod="service-telemetry/sg-core-2-build" Jan 07 10:07:30 crc kubenswrapper[5131]: I0107 10:07:30.506048 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-buildworkdir\") pod \"sg-core-2-build\" (UID: \"b633a95a-5d3c-4174-9ce0-71bd7d6feba7\") " pod="service-telemetry/sg-core-2-build" Jan 07 10:07:30 crc kubenswrapper[5131]: I0107 10:07:30.506079 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kv6l2\" (UniqueName: \"kubernetes.io/projected/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-kube-api-access-kv6l2\") pod \"sg-core-2-build\" (UID: \"b633a95a-5d3c-4174-9ce0-71bd7d6feba7\") " pod="service-telemetry/sg-core-2-build" Jan 07 10:07:30 crc kubenswrapper[5131]: I0107 10:07:30.506135 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"b633a95a-5d3c-4174-9ce0-71bd7d6feba7\") " pod="service-telemetry/sg-core-2-build" Jan 07 10:07:30 crc kubenswrapper[5131]: I0107 10:07:30.506196 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-buildcachedir\") pod \"sg-core-2-build\" (UID: \"b633a95a-5d3c-4174-9ce0-71bd7d6feba7\") " pod="service-telemetry/sg-core-2-build" Jan 07 10:07:30 crc 
kubenswrapper[5131]: I0107 10:07:30.506231 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"b633a95a-5d3c-4174-9ce0-71bd7d6feba7\") " pod="service-telemetry/sg-core-2-build" Jan 07 10:07:30 crc kubenswrapper[5131]: I0107 10:07:30.506291 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-builder-dockercfg-vc6bg-pull\") pod \"sg-core-2-build\" (UID: \"b633a95a-5d3c-4174-9ce0-71bd7d6feba7\") " pod="service-telemetry/sg-core-2-build" Jan 07 10:07:30 crc kubenswrapper[5131]: I0107 10:07:30.506321 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-container-storage-run\") pod \"sg-core-2-build\" (UID: \"b633a95a-5d3c-4174-9ce0-71bd7d6feba7\") " pod="service-telemetry/sg-core-2-build" Jan 07 10:07:30 crc kubenswrapper[5131]: I0107 10:07:30.506377 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"b633a95a-5d3c-4174-9ce0-71bd7d6feba7\") " pod="service-telemetry/sg-core-2-build" Jan 07 10:07:30 crc kubenswrapper[5131]: I0107 10:07:30.506782 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"b633a95a-5d3c-4174-9ce0-71bd7d6feba7\") " pod="service-telemetry/sg-core-2-build" Jan 07 10:07:30 crc kubenswrapper[5131]: I0107 10:07:30.506884 5131 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-builder-dockercfg-vc6bg-push\") pod \"sg-core-2-build\" (UID: \"b633a95a-5d3c-4174-9ce0-71bd7d6feba7\") " pod="service-telemetry/sg-core-2-build" Jan 07 10:07:30 crc kubenswrapper[5131]: I0107 10:07:30.507731 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-build-system-configs\") pod \"sg-core-2-build\" (UID: \"b633a95a-5d3c-4174-9ce0-71bd7d6feba7\") " pod="service-telemetry/sg-core-2-build" Jan 07 10:07:30 crc kubenswrapper[5131]: I0107 10:07:30.507824 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-buildcachedir\") pod \"sg-core-2-build\" (UID: \"b633a95a-5d3c-4174-9ce0-71bd7d6feba7\") " pod="service-telemetry/sg-core-2-build" Jan 07 10:07:30 crc kubenswrapper[5131]: I0107 10:07:30.507967 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"b633a95a-5d3c-4174-9ce0-71bd7d6feba7\") " pod="service-telemetry/sg-core-2-build" Jan 07 10:07:30 crc kubenswrapper[5131]: I0107 10:07:30.508124 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-buildworkdir\") pod \"sg-core-2-build\" (UID: \"b633a95a-5d3c-4174-9ce0-71bd7d6feba7\") " pod="service-telemetry/sg-core-2-build" Jan 07 10:07:30 crc kubenswrapper[5131]: I0107 10:07:30.508602 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: 
\"kubernetes.io/empty-dir/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-container-storage-root\") pod \"sg-core-2-build\" (UID: \"b633a95a-5d3c-4174-9ce0-71bd7d6feba7\") " pod="service-telemetry/sg-core-2-build" Jan 07 10:07:30 crc kubenswrapper[5131]: I0107 10:07:30.508917 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-container-storage-run\") pod \"sg-core-2-build\" (UID: \"b633a95a-5d3c-4174-9ce0-71bd7d6feba7\") " pod="service-telemetry/sg-core-2-build" Jan 07 10:07:30 crc kubenswrapper[5131]: I0107 10:07:30.509052 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"b633a95a-5d3c-4174-9ce0-71bd7d6feba7\") " pod="service-telemetry/sg-core-2-build" Jan 07 10:07:30 crc kubenswrapper[5131]: I0107 10:07:30.509616 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"b633a95a-5d3c-4174-9ce0-71bd7d6feba7\") " pod="service-telemetry/sg-core-2-build" Jan 07 10:07:30 crc kubenswrapper[5131]: I0107 10:07:30.510510 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"b633a95a-5d3c-4174-9ce0-71bd7d6feba7\") " pod="service-telemetry/sg-core-2-build" Jan 07 10:07:30 crc kubenswrapper[5131]: I0107 10:07:30.513877 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-builder-dockercfg-vc6bg-push\") pod \"sg-core-2-build\" (UID: 
\"b633a95a-5d3c-4174-9ce0-71bd7d6feba7\") " pod="service-telemetry/sg-core-2-build" Jan 07 10:07:30 crc kubenswrapper[5131]: I0107 10:07:30.520359 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-builder-dockercfg-vc6bg-pull\") pod \"sg-core-2-build\" (UID: \"b633a95a-5d3c-4174-9ce0-71bd7d6feba7\") " pod="service-telemetry/sg-core-2-build" Jan 07 10:07:30 crc kubenswrapper[5131]: I0107 10:07:30.539933 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kv6l2\" (UniqueName: \"kubernetes.io/projected/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-kube-api-access-kv6l2\") pod \"sg-core-2-build\" (UID: \"b633a95a-5d3c-4174-9ce0-71bd7d6feba7\") " pod="service-telemetry/sg-core-2-build" Jan 07 10:07:30 crc kubenswrapper[5131]: I0107 10:07:30.669645 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Jan 07 10:07:30 crc kubenswrapper[5131]: I0107 10:07:30.980167 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-2-build"] Jan 07 10:07:30 crc kubenswrapper[5131]: W0107 10:07:30.988552 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb633a95a_5d3c_4174_9ce0_71bd7d6feba7.slice/crio-01e9887d9cb4c1a9c6aa63824f14aa119f8977dd6c1eced7e309c57bd8eafd44 WatchSource:0}: Error finding container 01e9887d9cb4c1a9c6aa63824f14aa119f8977dd6c1eced7e309c57bd8eafd44: Status 404 returned error can't find the container with id 01e9887d9cb4c1a9c6aa63824f14aa119f8977dd6c1eced7e309c57bd8eafd44 Jan 07 10:07:31 crc kubenswrapper[5131]: I0107 10:07:31.132054 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" 
event={"ID":"b633a95a-5d3c-4174-9ce0-71bd7d6feba7","Type":"ContainerStarted","Data":"01e9887d9cb4c1a9c6aa63824f14aa119f8977dd6c1eced7e309c57bd8eafd44"} Jan 07 10:07:32 crc kubenswrapper[5131]: I0107 10:07:32.138826 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"b633a95a-5d3c-4174-9ce0-71bd7d6feba7","Type":"ContainerStarted","Data":"a4aafa05df7d84aa88098b87fb46ff41b7df594f3678bd93ee02b1ce327de2ad"} Jan 07 10:07:32 crc kubenswrapper[5131]: I0107 10:07:32.204533 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e6843e7-ae0e-4aba-b0e3-f605beb81ea2" path="/var/lib/kubelet/pods/5e6843e7-ae0e-4aba-b0e3-f605beb81ea2/volumes" Jan 07 10:07:33 crc kubenswrapper[5131]: I0107 10:07:33.146885 5131 generic.go:358] "Generic (PLEG): container finished" podID="b633a95a-5d3c-4174-9ce0-71bd7d6feba7" containerID="a4aafa05df7d84aa88098b87fb46ff41b7df594f3678bd93ee02b1ce327de2ad" exitCode=0 Jan 07 10:07:33 crc kubenswrapper[5131]: I0107 10:07:33.146976 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"b633a95a-5d3c-4174-9ce0-71bd7d6feba7","Type":"ContainerDied","Data":"a4aafa05df7d84aa88098b87fb46ff41b7df594f3678bd93ee02b1ce327de2ad"} Jan 07 10:07:34 crc kubenswrapper[5131]: I0107 10:07:34.160986 5131 generic.go:358] "Generic (PLEG): container finished" podID="b633a95a-5d3c-4174-9ce0-71bd7d6feba7" containerID="994732274bcc1b94e5577101c9c004ef5d34acba70f78197ac593bedb2321625" exitCode=0 Jan 07 10:07:34 crc kubenswrapper[5131]: I0107 10:07:34.161050 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"b633a95a-5d3c-4174-9ce0-71bd7d6feba7","Type":"ContainerDied","Data":"994732274bcc1b94e5577101c9c004ef5d34acba70f78197ac593bedb2321625"} Jan 07 10:07:34 crc kubenswrapper[5131]: I0107 10:07:34.209435 5131 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_sg-core-2-build_b633a95a-5d3c-4174-9ce0-71bd7d6feba7/manage-dockerfile/0.log" Jan 07 10:07:35 crc kubenswrapper[5131]: I0107 10:07:35.172638 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"b633a95a-5d3c-4174-9ce0-71bd7d6feba7","Type":"ContainerStarted","Data":"c1c7988197ed6f45c8eded22f1267de293dc15f37ef6d042a9051c4ad247212f"} Jan 07 10:07:35 crc kubenswrapper[5131]: I0107 10:07:35.205553 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-core-2-build" podStartSLOduration=5.20552819 podStartE2EDuration="5.20552819s" podCreationTimestamp="2026-01-07 10:07:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 10:07:35.195262538 +0000 UTC m=+1083.361564132" watchObservedRunningTime="2026-01-07 10:07:35.20552819 +0000 UTC m=+1083.371829794" Jan 07 10:08:00 crc kubenswrapper[5131]: I0107 10:08:00.146196 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29463008-qzfcq"] Jan 07 10:08:00 crc kubenswrapper[5131]: I0107 10:08:00.769704 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29463008-qzfcq"] Jan 07 10:08:00 crc kubenswrapper[5131]: I0107 10:08:00.770081 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29463008-qzfcq" Jan 07 10:08:00 crc kubenswrapper[5131]: I0107 10:08:00.773724 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 07 10:08:00 crc kubenswrapper[5131]: I0107 10:08:00.774239 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-l8fwl\"" Jan 07 10:08:00 crc kubenswrapper[5131]: I0107 10:08:00.775175 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 07 10:08:00 crc kubenswrapper[5131]: I0107 10:08:00.891813 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2kdj\" (UniqueName: \"kubernetes.io/projected/c0cfe355-971c-4d53-99ab-77e026860934-kube-api-access-z2kdj\") pod \"auto-csr-approver-29463008-qzfcq\" (UID: \"c0cfe355-971c-4d53-99ab-77e026860934\") " pod="openshift-infra/auto-csr-approver-29463008-qzfcq" Jan 07 10:08:00 crc kubenswrapper[5131]: I0107 10:08:00.993134 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z2kdj\" (UniqueName: \"kubernetes.io/projected/c0cfe355-971c-4d53-99ab-77e026860934-kube-api-access-z2kdj\") pod \"auto-csr-approver-29463008-qzfcq\" (UID: \"c0cfe355-971c-4d53-99ab-77e026860934\") " pod="openshift-infra/auto-csr-approver-29463008-qzfcq" Jan 07 10:08:01 crc kubenswrapper[5131]: I0107 10:08:01.017465 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2kdj\" (UniqueName: \"kubernetes.io/projected/c0cfe355-971c-4d53-99ab-77e026860934-kube-api-access-z2kdj\") pod \"auto-csr-approver-29463008-qzfcq\" (UID: \"c0cfe355-971c-4d53-99ab-77e026860934\") " pod="openshift-infra/auto-csr-approver-29463008-qzfcq" Jan 07 10:08:01 crc kubenswrapper[5131]: I0107 10:08:01.100342 5131 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29463008-qzfcq" Jan 07 10:08:01 crc kubenswrapper[5131]: I0107 10:08:01.330400 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29463008-qzfcq"] Jan 07 10:08:01 crc kubenswrapper[5131]: W0107 10:08:01.337674 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0cfe355_971c_4d53_99ab_77e026860934.slice/crio-805377a1a541422df8813f61989317bc7308a7188b952fb2fbec368164e921ae WatchSource:0}: Error finding container 805377a1a541422df8813f61989317bc7308a7188b952fb2fbec368164e921ae: Status 404 returned error can't find the container with id 805377a1a541422df8813f61989317bc7308a7188b952fb2fbec368164e921ae Jan 07 10:08:01 crc kubenswrapper[5131]: I0107 10:08:01.406517 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29463008-qzfcq" event={"ID":"c0cfe355-971c-4d53-99ab-77e026860934","Type":"ContainerStarted","Data":"805377a1a541422df8813f61989317bc7308a7188b952fb2fbec368164e921ae"} Jan 07 10:08:03 crc kubenswrapper[5131]: I0107 10:08:03.423987 5131 generic.go:358] "Generic (PLEG): container finished" podID="c0cfe355-971c-4d53-99ab-77e026860934" containerID="1c9cce502c9f1f1da04380a3ddb3ee24d0aded9a623539e893df31f1217c2255" exitCode=0 Jan 07 10:08:03 crc kubenswrapper[5131]: I0107 10:08:03.424101 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29463008-qzfcq" event={"ID":"c0cfe355-971c-4d53-99ab-77e026860934","Type":"ContainerDied","Data":"1c9cce502c9f1f1da04380a3ddb3ee24d0aded9a623539e893df31f1217c2255"} Jan 07 10:08:04 crc kubenswrapper[5131]: I0107 10:08:04.720558 5131 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29463008-qzfcq" Jan 07 10:08:04 crc kubenswrapper[5131]: I0107 10:08:04.847713 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2kdj\" (UniqueName: \"kubernetes.io/projected/c0cfe355-971c-4d53-99ab-77e026860934-kube-api-access-z2kdj\") pod \"c0cfe355-971c-4d53-99ab-77e026860934\" (UID: \"c0cfe355-971c-4d53-99ab-77e026860934\") " Jan 07 10:08:04 crc kubenswrapper[5131]: I0107 10:08:04.853882 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0cfe355-971c-4d53-99ab-77e026860934-kube-api-access-z2kdj" (OuterVolumeSpecName: "kube-api-access-z2kdj") pod "c0cfe355-971c-4d53-99ab-77e026860934" (UID: "c0cfe355-971c-4d53-99ab-77e026860934"). InnerVolumeSpecName "kube-api-access-z2kdj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 10:08:04 crc kubenswrapper[5131]: I0107 10:08:04.948630 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z2kdj\" (UniqueName: \"kubernetes.io/projected/c0cfe355-971c-4d53-99ab-77e026860934-kube-api-access-z2kdj\") on node \"crc\" DevicePath \"\"" Jan 07 10:08:05 crc kubenswrapper[5131]: I0107 10:08:05.440391 5131 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29463008-qzfcq" Jan 07 10:08:05 crc kubenswrapper[5131]: I0107 10:08:05.440439 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29463008-qzfcq" event={"ID":"c0cfe355-971c-4d53-99ab-77e026860934","Type":"ContainerDied","Data":"805377a1a541422df8813f61989317bc7308a7188b952fb2fbec368164e921ae"} Jan 07 10:08:05 crc kubenswrapper[5131]: I0107 10:08:05.440498 5131 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="805377a1a541422df8813f61989317bc7308a7188b952fb2fbec368164e921ae" Jan 07 10:08:05 crc kubenswrapper[5131]: I0107 10:08:05.794324 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29463002-5t29p"] Jan 07 10:08:05 crc kubenswrapper[5131]: I0107 10:08:05.798566 5131 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29463002-5t29p"] Jan 07 10:08:06 crc kubenswrapper[5131]: I0107 10:08:06.189279 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="641ea204-fc48-4482-bf9e-1d45e8b8e7c7" path="/var/lib/kubelet/pods/641ea204-fc48-4482-bf9e-1d45e8b8e7c7/volumes" Jan 07 10:08:33 crc kubenswrapper[5131]: I0107 10:08:33.896300 5131 scope.go:117] "RemoveContainer" containerID="537dfc5f6fabc7eb3fec8b77b6c5aff88389c15c2de6afc009ff2ee054bfe24d" Jan 07 10:09:20 crc kubenswrapper[5131]: I0107 10:09:20.663543 5131 patch_prober.go:28] interesting pod/machine-config-daemon-dvdrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 07 10:09:20 crc kubenswrapper[5131]: I0107 10:09:20.664388 5131 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 07 10:09:32 crc kubenswrapper[5131]: I0107 10:09:32.733425 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wcqw9_a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1/kube-multus/0.log" Jan 07 10:09:32 crc kubenswrapper[5131]: I0107 10:09:32.735078 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wcqw9_a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1/kube-multus/0.log" Jan 07 10:09:32 crc kubenswrapper[5131]: I0107 10:09:32.738957 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 07 10:09:32 crc kubenswrapper[5131]: I0107 10:09:32.740277 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 07 10:09:50 crc kubenswrapper[5131]: I0107 10:09:50.807082 5131 patch_prober.go:28] interesting pod/machine-config-daemon-dvdrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 07 10:09:50 crc kubenswrapper[5131]: I0107 10:09:50.807778 5131 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 07 10:10:00 crc kubenswrapper[5131]: I0107 10:10:00.151464 5131 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29463010-h8fm9"] Jan 07 10:10:00 crc kubenswrapper[5131]: I0107 10:10:00.170136 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c0cfe355-971c-4d53-99ab-77e026860934" containerName="oc" Jan 07 10:10:00 crc kubenswrapper[5131]: I0107 10:10:00.170190 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0cfe355-971c-4d53-99ab-77e026860934" containerName="oc" Jan 07 10:10:00 crc kubenswrapper[5131]: I0107 10:10:00.172428 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="c0cfe355-971c-4d53-99ab-77e026860934" containerName="oc" Jan 07 10:10:00 crc kubenswrapper[5131]: I0107 10:10:00.227929 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29463010-h8fm9"] Jan 07 10:10:00 crc kubenswrapper[5131]: I0107 10:10:00.228125 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29463010-h8fm9" Jan 07 10:10:00 crc kubenswrapper[5131]: I0107 10:10:00.230827 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-l8fwl\"" Jan 07 10:10:00 crc kubenswrapper[5131]: I0107 10:10:00.231158 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 07 10:10:00 crc kubenswrapper[5131]: I0107 10:10:00.231679 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 07 10:10:00 crc kubenswrapper[5131]: I0107 10:10:00.345301 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdb5r\" (UniqueName: \"kubernetes.io/projected/5e840dc4-d123-4300-b964-e41fab140d92-kube-api-access-rdb5r\") pod \"auto-csr-approver-29463010-h8fm9\" (UID: \"5e840dc4-d123-4300-b964-e41fab140d92\") " 
pod="openshift-infra/auto-csr-approver-29463010-h8fm9" Jan 07 10:10:00 crc kubenswrapper[5131]: I0107 10:10:00.447684 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rdb5r\" (UniqueName: \"kubernetes.io/projected/5e840dc4-d123-4300-b964-e41fab140d92-kube-api-access-rdb5r\") pod \"auto-csr-approver-29463010-h8fm9\" (UID: \"5e840dc4-d123-4300-b964-e41fab140d92\") " pod="openshift-infra/auto-csr-approver-29463010-h8fm9" Jan 07 10:10:00 crc kubenswrapper[5131]: I0107 10:10:00.467214 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdb5r\" (UniqueName: \"kubernetes.io/projected/5e840dc4-d123-4300-b964-e41fab140d92-kube-api-access-rdb5r\") pod \"auto-csr-approver-29463010-h8fm9\" (UID: \"5e840dc4-d123-4300-b964-e41fab140d92\") " pod="openshift-infra/auto-csr-approver-29463010-h8fm9" Jan 07 10:10:00 crc kubenswrapper[5131]: I0107 10:10:00.549521 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29463010-h8fm9" Jan 07 10:10:00 crc kubenswrapper[5131]: I0107 10:10:00.817097 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29463010-h8fm9"] Jan 07 10:10:01 crc kubenswrapper[5131]: I0107 10:10:01.311536 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29463010-h8fm9" event={"ID":"5e840dc4-d123-4300-b964-e41fab140d92","Type":"ContainerStarted","Data":"fe5264748972b6ef936311f459085796da45ccb4f9ce4d79560a63fc1b1ff980"} Jan 07 10:10:07 crc kubenswrapper[5131]: I0107 10:10:07.372907 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29463010-h8fm9" event={"ID":"5e840dc4-d123-4300-b964-e41fab140d92","Type":"ContainerStarted","Data":"3b4c65dbc24e307a05bf3fa51b6a48cb6142b0e61dcbbaa514e8312044c730b6"} Jan 07 10:10:07 crc kubenswrapper[5131]: I0107 10:10:07.390895 5131 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29463010-h8fm9" podStartSLOduration=1.251901022 podStartE2EDuration="7.390881575s" podCreationTimestamp="2026-01-07 10:10:00 +0000 UTC" firstStartedPulling="2026-01-07 10:10:00.821168015 +0000 UTC m=+1228.987469589" lastFinishedPulling="2026-01-07 10:10:06.960148538 +0000 UTC m=+1235.126450142" observedRunningTime="2026-01-07 10:10:07.387649097 +0000 UTC m=+1235.553950651" watchObservedRunningTime="2026-01-07 10:10:07.390881575 +0000 UTC m=+1235.557183139" Jan 07 10:10:08 crc kubenswrapper[5131]: I0107 10:10:08.384749 5131 generic.go:358] "Generic (PLEG): container finished" podID="5e840dc4-d123-4300-b964-e41fab140d92" containerID="3b4c65dbc24e307a05bf3fa51b6a48cb6142b0e61dcbbaa514e8312044c730b6" exitCode=0 Jan 07 10:10:08 crc kubenswrapper[5131]: I0107 10:10:08.384878 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29463010-h8fm9" event={"ID":"5e840dc4-d123-4300-b964-e41fab140d92","Type":"ContainerDied","Data":"3b4c65dbc24e307a05bf3fa51b6a48cb6142b0e61dcbbaa514e8312044c730b6"} Jan 07 10:10:09 crc kubenswrapper[5131]: I0107 10:10:09.654261 5131 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29463010-h8fm9" Jan 07 10:10:09 crc kubenswrapper[5131]: I0107 10:10:09.704491 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdb5r\" (UniqueName: \"kubernetes.io/projected/5e840dc4-d123-4300-b964-e41fab140d92-kube-api-access-rdb5r\") pod \"5e840dc4-d123-4300-b964-e41fab140d92\" (UID: \"5e840dc4-d123-4300-b964-e41fab140d92\") " Jan 07 10:10:09 crc kubenswrapper[5131]: I0107 10:10:09.712216 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e840dc4-d123-4300-b964-e41fab140d92-kube-api-access-rdb5r" (OuterVolumeSpecName: "kube-api-access-rdb5r") pod "5e840dc4-d123-4300-b964-e41fab140d92" (UID: "5e840dc4-d123-4300-b964-e41fab140d92"). InnerVolumeSpecName "kube-api-access-rdb5r". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 10:10:09 crc kubenswrapper[5131]: I0107 10:10:09.806590 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rdb5r\" (UniqueName: \"kubernetes.io/projected/5e840dc4-d123-4300-b964-e41fab140d92-kube-api-access-rdb5r\") on node \"crc\" DevicePath \"\"" Jan 07 10:10:10 crc kubenswrapper[5131]: I0107 10:10:10.399874 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29463010-h8fm9" event={"ID":"5e840dc4-d123-4300-b964-e41fab140d92","Type":"ContainerDied","Data":"fe5264748972b6ef936311f459085796da45ccb4f9ce4d79560a63fc1b1ff980"} Jan 07 10:10:10 crc kubenswrapper[5131]: I0107 10:10:10.399917 5131 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe5264748972b6ef936311f459085796da45ccb4f9ce4d79560a63fc1b1ff980" Jan 07 10:10:10 crc kubenswrapper[5131]: I0107 10:10:10.399931 5131 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29463010-h8fm9" Jan 07 10:10:10 crc kubenswrapper[5131]: I0107 10:10:10.455318 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29463004-qdmtn"] Jan 07 10:10:10 crc kubenswrapper[5131]: I0107 10:10:10.461975 5131 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29463004-qdmtn"] Jan 07 10:10:12 crc kubenswrapper[5131]: I0107 10:10:12.189060 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5556d222-d67e-4aea-b62c-864c0ea52ad2" path="/var/lib/kubelet/pods/5556d222-d67e-4aea-b62c-864c0ea52ad2/volumes" Jan 07 10:10:20 crc kubenswrapper[5131]: I0107 10:10:20.662827 5131 patch_prober.go:28] interesting pod/machine-config-daemon-dvdrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 07 10:10:20 crc kubenswrapper[5131]: I0107 10:10:20.663208 5131 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 07 10:10:20 crc kubenswrapper[5131]: I0107 10:10:20.663279 5131 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" Jan 07 10:10:20 crc kubenswrapper[5131]: I0107 10:10:20.663928 5131 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6a3199d24d9f75069e3d6ef18dc98d686384b2b6b4a377d2ed0dde963838ac1e"} pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 07 10:10:20 crc kubenswrapper[5131]: I0107 10:10:20.664006 5131 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73" containerName="machine-config-daemon" containerID="cri-o://6a3199d24d9f75069e3d6ef18dc98d686384b2b6b4a377d2ed0dde963838ac1e" gracePeriod=600 Jan 07 10:10:21 crc kubenswrapper[5131]: I0107 10:10:21.485202 5131 generic.go:358] "Generic (PLEG): container finished" podID="3942e752-44ba-4678-8723-6cd778e60d73" containerID="6a3199d24d9f75069e3d6ef18dc98d686384b2b6b4a377d2ed0dde963838ac1e" exitCode=0 Jan 07 10:10:21 crc kubenswrapper[5131]: I0107 10:10:21.485299 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" event={"ID":"3942e752-44ba-4678-8723-6cd778e60d73","Type":"ContainerDied","Data":"6a3199d24d9f75069e3d6ef18dc98d686384b2b6b4a377d2ed0dde963838ac1e"} Jan 07 10:10:21 crc kubenswrapper[5131]: I0107 10:10:21.485933 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" event={"ID":"3942e752-44ba-4678-8723-6cd778e60d73","Type":"ContainerStarted","Data":"13b258610a3045e67e9e5de274b918c4da88f0376e2747328b51f3ef9deaf0e0"} Jan 07 10:10:21 crc kubenswrapper[5131]: I0107 10:10:21.485981 5131 scope.go:117] "RemoveContainer" containerID="763e1eb5460745b4cb7278fb5c4fbd6802295fde5df336a494e758ddf511ec87" Jan 07 10:10:34 crc kubenswrapper[5131]: I0107 10:10:34.018650 5131 scope.go:117] "RemoveContainer" containerID="aa0e2cfcae903df8e64949f9774bb3afe65f49cc9468bc7429f56de52dfb88d3" Jan 07 10:11:16 crc kubenswrapper[5131]: I0107 10:11:16.122519 5131 trace.go:236] Trace[1349689254]: "Calculate volume metrics of container-storage-root for pod service-telemetry/sg-core-2-build" (07-Jan-2026 10:11:15.096) 
(total time: 1025ms): Jan 07 10:11:16 crc kubenswrapper[5131]: Trace[1349689254]: [1.025911795s] [1.025911795s] END Jan 07 10:11:21 crc kubenswrapper[5131]: I0107 10:11:21.926935 5131 generic.go:358] "Generic (PLEG): container finished" podID="b633a95a-5d3c-4174-9ce0-71bd7d6feba7" containerID="c1c7988197ed6f45c8eded22f1267de293dc15f37ef6d042a9051c4ad247212f" exitCode=0 Jan 07 10:11:21 crc kubenswrapper[5131]: I0107 10:11:21.927517 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"b633a95a-5d3c-4174-9ce0-71bd7d6feba7","Type":"ContainerDied","Data":"c1c7988197ed6f45c8eded22f1267de293dc15f37ef6d042a9051c4ad247212f"} Jan 07 10:11:23 crc kubenswrapper[5131]: I0107 10:11:23.212160 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Jan 07 10:11:23 crc kubenswrapper[5131]: I0107 10:11:23.340664 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-buildcachedir\") pod \"b633a95a-5d3c-4174-9ce0-71bd7d6feba7\" (UID: \"b633a95a-5d3c-4174-9ce0-71bd7d6feba7\") " Jan 07 10:11:23 crc kubenswrapper[5131]: I0107 10:11:23.340741 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-builder-dockercfg-vc6bg-pull\") pod \"b633a95a-5d3c-4174-9ce0-71bd7d6feba7\" (UID: \"b633a95a-5d3c-4174-9ce0-71bd7d6feba7\") " Jan 07 10:11:23 crc kubenswrapper[5131]: I0107 10:11:23.340767 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-build-system-configs\") pod \"b633a95a-5d3c-4174-9ce0-71bd7d6feba7\" (UID: \"b633a95a-5d3c-4174-9ce0-71bd7d6feba7\") " Jan 07 10:11:23 crc 
kubenswrapper[5131]: I0107 10:11:23.340773 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "b633a95a-5d3c-4174-9ce0-71bd7d6feba7" (UID: "b633a95a-5d3c-4174-9ce0-71bd7d6feba7"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 07 10:11:23 crc kubenswrapper[5131]: I0107 10:11:23.340856 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-buildworkdir\") pod \"b633a95a-5d3c-4174-9ce0-71bd7d6feba7\" (UID: \"b633a95a-5d3c-4174-9ce0-71bd7d6feba7\") " Jan 07 10:11:23 crc kubenswrapper[5131]: I0107 10:11:23.340896 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-container-storage-root\") pod \"b633a95a-5d3c-4174-9ce0-71bd7d6feba7\" (UID: \"b633a95a-5d3c-4174-9ce0-71bd7d6feba7\") " Jan 07 10:11:23 crc kubenswrapper[5131]: I0107 10:11:23.340926 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-build-proxy-ca-bundles\") pod \"b633a95a-5d3c-4174-9ce0-71bd7d6feba7\" (UID: \"b633a95a-5d3c-4174-9ce0-71bd7d6feba7\") " Jan 07 10:11:23 crc kubenswrapper[5131]: I0107 10:11:23.340961 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-build-ca-bundles\") pod \"b633a95a-5d3c-4174-9ce0-71bd7d6feba7\" (UID: \"b633a95a-5d3c-4174-9ce0-71bd7d6feba7\") " Jan 07 10:11:23 crc kubenswrapper[5131]: I0107 10:11:23.340983 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-kv6l2\" (UniqueName: \"kubernetes.io/projected/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-kube-api-access-kv6l2\") pod \"b633a95a-5d3c-4174-9ce0-71bd7d6feba7\" (UID: \"b633a95a-5d3c-4174-9ce0-71bd7d6feba7\") " Jan 07 10:11:23 crc kubenswrapper[5131]: I0107 10:11:23.341017 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-build-blob-cache\") pod \"b633a95a-5d3c-4174-9ce0-71bd7d6feba7\" (UID: \"b633a95a-5d3c-4174-9ce0-71bd7d6feba7\") " Jan 07 10:11:23 crc kubenswrapper[5131]: I0107 10:11:23.341072 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-builder-dockercfg-vc6bg-push\") pod \"b633a95a-5d3c-4174-9ce0-71bd7d6feba7\" (UID: \"b633a95a-5d3c-4174-9ce0-71bd7d6feba7\") " Jan 07 10:11:23 crc kubenswrapper[5131]: I0107 10:11:23.341119 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-node-pullsecrets\") pod \"b633a95a-5d3c-4174-9ce0-71bd7d6feba7\" (UID: \"b633a95a-5d3c-4174-9ce0-71bd7d6feba7\") " Jan 07 10:11:23 crc kubenswrapper[5131]: I0107 10:11:23.341517 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "b633a95a-5d3c-4174-9ce0-71bd7d6feba7" (UID: "b633a95a-5d3c-4174-9ce0-71bd7d6feba7"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 07 10:11:23 crc kubenswrapper[5131]: I0107 10:11:23.341952 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "b633a95a-5d3c-4174-9ce0-71bd7d6feba7" (UID: "b633a95a-5d3c-4174-9ce0-71bd7d6feba7"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 10:11:23 crc kubenswrapper[5131]: I0107 10:11:23.341982 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "b633a95a-5d3c-4174-9ce0-71bd7d6feba7" (UID: "b633a95a-5d3c-4174-9ce0-71bd7d6feba7"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 10:11:23 crc kubenswrapper[5131]: I0107 10:11:23.342245 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "b633a95a-5d3c-4174-9ce0-71bd7d6feba7" (UID: "b633a95a-5d3c-4174-9ce0-71bd7d6feba7"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 10:11:23 crc kubenswrapper[5131]: I0107 10:11:23.342440 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-container-storage-run\") pod \"b633a95a-5d3c-4174-9ce0-71bd7d6feba7\" (UID: \"b633a95a-5d3c-4174-9ce0-71bd7d6feba7\") " Jan 07 10:11:23 crc kubenswrapper[5131]: I0107 10:11:23.342742 5131 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 07 10:11:23 crc kubenswrapper[5131]: I0107 10:11:23.342766 5131 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 07 10:11:23 crc kubenswrapper[5131]: I0107 10:11:23.342780 5131 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 07 10:11:23 crc kubenswrapper[5131]: I0107 10:11:23.342794 5131 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 07 10:11:23 crc kubenswrapper[5131]: I0107 10:11:23.342806 5131 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 07 10:11:23 crc kubenswrapper[5131]: I0107 10:11:23.344032 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "b633a95a-5d3c-4174-9ce0-71bd7d6feba7" (UID: "b633a95a-5d3c-4174-9ce0-71bd7d6feba7"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:11:23 crc kubenswrapper[5131]: I0107 10:11:23.351443 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-builder-dockercfg-vc6bg-pull" (OuterVolumeSpecName: "builder-dockercfg-vc6bg-pull") pod "b633a95a-5d3c-4174-9ce0-71bd7d6feba7" (UID: "b633a95a-5d3c-4174-9ce0-71bd7d6feba7"). InnerVolumeSpecName "builder-dockercfg-vc6bg-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 10:11:23 crc kubenswrapper[5131]: I0107 10:11:23.351558 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-kube-api-access-kv6l2" (OuterVolumeSpecName: "kube-api-access-kv6l2") pod "b633a95a-5d3c-4174-9ce0-71bd7d6feba7" (UID: "b633a95a-5d3c-4174-9ce0-71bd7d6feba7"). InnerVolumeSpecName "kube-api-access-kv6l2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 10:11:23 crc kubenswrapper[5131]: I0107 10:11:23.351755 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "b633a95a-5d3c-4174-9ce0-71bd7d6feba7" (UID: "b633a95a-5d3c-4174-9ce0-71bd7d6feba7"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:11:23 crc kubenswrapper[5131]: I0107 10:11:23.351896 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-builder-dockercfg-vc6bg-push" (OuterVolumeSpecName: "builder-dockercfg-vc6bg-push") pod "b633a95a-5d3c-4174-9ce0-71bd7d6feba7" (UID: "b633a95a-5d3c-4174-9ce0-71bd7d6feba7"). InnerVolumeSpecName "builder-dockercfg-vc6bg-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 10:11:23 crc kubenswrapper[5131]: I0107 10:11:23.444357 5131 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-builder-dockercfg-vc6bg-push\") on node \"crc\" DevicePath \"\"" Jan 07 10:11:23 crc kubenswrapper[5131]: I0107 10:11:23.444387 5131 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 07 10:11:23 crc kubenswrapper[5131]: I0107 10:11:23.444395 5131 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-builder-dockercfg-vc6bg-pull\") on node \"crc\" DevicePath \"\"" Jan 07 10:11:23 crc kubenswrapper[5131]: I0107 10:11:23.444403 5131 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 07 10:11:23 crc kubenswrapper[5131]: I0107 10:11:23.444412 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kv6l2\" (UniqueName: \"kubernetes.io/projected/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-kube-api-access-kv6l2\") on node \"crc\" DevicePath \"\"" Jan 07 10:11:23 crc kubenswrapper[5131]: I0107 
10:11:23.728620 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "b633a95a-5d3c-4174-9ce0-71bd7d6feba7" (UID: "b633a95a-5d3c-4174-9ce0-71bd7d6feba7"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:11:23 crc kubenswrapper[5131]: I0107 10:11:23.748341 5131 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 07 10:11:23 crc kubenswrapper[5131]: I0107 10:11:23.944317 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Jan 07 10:11:23 crc kubenswrapper[5131]: I0107 10:11:23.944327 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"b633a95a-5d3c-4174-9ce0-71bd7d6feba7","Type":"ContainerDied","Data":"01e9887d9cb4c1a9c6aa63824f14aa119f8977dd6c1eced7e309c57bd8eafd44"} Jan 07 10:11:23 crc kubenswrapper[5131]: I0107 10:11:23.944367 5131 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01e9887d9cb4c1a9c6aa63824f14aa119f8977dd6c1eced7e309c57bd8eafd44" Jan 07 10:11:26 crc kubenswrapper[5131]: I0107 10:11:26.157148 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "b633a95a-5d3c-4174-9ce0-71bd7d6feba7" (UID: "b633a95a-5d3c-4174-9ce0-71bd7d6feba7"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:11:26 crc kubenswrapper[5131]: I0107 10:11:26.182312 5131 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b633a95a-5d3c-4174-9ce0-71bd7d6feba7-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 07 10:11:28 crc kubenswrapper[5131]: I0107 10:11:28.496934 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 07 10:11:28 crc kubenswrapper[5131]: I0107 10:11:28.497937 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b633a95a-5d3c-4174-9ce0-71bd7d6feba7" containerName="git-clone" Jan 07 10:11:28 crc kubenswrapper[5131]: I0107 10:11:28.497953 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="b633a95a-5d3c-4174-9ce0-71bd7d6feba7" containerName="git-clone" Jan 07 10:11:28 crc kubenswrapper[5131]: I0107 10:11:28.497975 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5e840dc4-d123-4300-b964-e41fab140d92" containerName="oc" Jan 07 10:11:28 crc kubenswrapper[5131]: I0107 10:11:28.497983 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e840dc4-d123-4300-b964-e41fab140d92" containerName="oc" Jan 07 10:11:28 crc kubenswrapper[5131]: I0107 10:11:28.497996 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b633a95a-5d3c-4174-9ce0-71bd7d6feba7" containerName="manage-dockerfile" Jan 07 10:11:28 crc kubenswrapper[5131]: I0107 10:11:28.498003 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="b633a95a-5d3c-4174-9ce0-71bd7d6feba7" containerName="manage-dockerfile" Jan 07 10:11:28 crc kubenswrapper[5131]: I0107 10:11:28.498023 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b633a95a-5d3c-4174-9ce0-71bd7d6feba7" containerName="docker-build" Jan 07 10:11:28 crc kubenswrapper[5131]: I0107 10:11:28.498029 5131 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="b633a95a-5d3c-4174-9ce0-71bd7d6feba7" containerName="docker-build" Jan 07 10:11:28 crc kubenswrapper[5131]: I0107 10:11:28.498170 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="b633a95a-5d3c-4174-9ce0-71bd7d6feba7" containerName="docker-build" Jan 07 10:11:28 crc kubenswrapper[5131]: I0107 10:11:28.498184 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="5e840dc4-d123-4300-b964-e41fab140d92" containerName="oc" Jan 07 10:11:28 crc kubenswrapper[5131]: I0107 10:11:28.632556 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 07 10:11:28 crc kubenswrapper[5131]: I0107 10:11:28.632711 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Jan 07 10:11:28 crc kubenswrapper[5131]: I0107 10:11:28.636226 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-1-sys-config\"" Jan 07 10:11:28 crc kubenswrapper[5131]: I0107 10:11:28.636366 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-vc6bg\"" Jan 07 10:11:28 crc kubenswrapper[5131]: I0107 10:11:28.636381 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-1-ca\"" Jan 07 10:11:28 crc kubenswrapper[5131]: I0107 10:11:28.637529 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-1-global-ca\"" Jan 07 10:11:28 crc kubenswrapper[5131]: I0107 10:11:28.822598 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/00244a4a-fa85-46ba-a6bd-37722a995e0e-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"00244a4a-fa85-46ba-a6bd-37722a995e0e\") " pod="service-telemetry/sg-bridge-1-build" 
Jan 07 10:11:28 crc kubenswrapper[5131]: I0107 10:11:28.822660 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55z45\" (UniqueName: \"kubernetes.io/projected/00244a4a-fa85-46ba-a6bd-37722a995e0e-kube-api-access-55z45\") pod \"sg-bridge-1-build\" (UID: \"00244a4a-fa85-46ba-a6bd-37722a995e0e\") " pod="service-telemetry/sg-bridge-1-build" Jan 07 10:11:28 crc kubenswrapper[5131]: I0107 10:11:28.822700 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/00244a4a-fa85-46ba-a6bd-37722a995e0e-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"00244a4a-fa85-46ba-a6bd-37722a995e0e\") " pod="service-telemetry/sg-bridge-1-build" Jan 07 10:11:28 crc kubenswrapper[5131]: I0107 10:11:28.822728 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/00244a4a-fa85-46ba-a6bd-37722a995e0e-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"00244a4a-fa85-46ba-a6bd-37722a995e0e\") " pod="service-telemetry/sg-bridge-1-build" Jan 07 10:11:28 crc kubenswrapper[5131]: I0107 10:11:28.822811 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/00244a4a-fa85-46ba-a6bd-37722a995e0e-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"00244a4a-fa85-46ba-a6bd-37722a995e0e\") " pod="service-telemetry/sg-bridge-1-build" Jan 07 10:11:28 crc kubenswrapper[5131]: I0107 10:11:28.822864 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/00244a4a-fa85-46ba-a6bd-37722a995e0e-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"00244a4a-fa85-46ba-a6bd-37722a995e0e\") " 
pod="service-telemetry/sg-bridge-1-build" Jan 07 10:11:28 crc kubenswrapper[5131]: I0107 10:11:28.823001 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/00244a4a-fa85-46ba-a6bd-37722a995e0e-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"00244a4a-fa85-46ba-a6bd-37722a995e0e\") " pod="service-telemetry/sg-bridge-1-build" Jan 07 10:11:28 crc kubenswrapper[5131]: I0107 10:11:28.823072 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/00244a4a-fa85-46ba-a6bd-37722a995e0e-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"00244a4a-fa85-46ba-a6bd-37722a995e0e\") " pod="service-telemetry/sg-bridge-1-build" Jan 07 10:11:28 crc kubenswrapper[5131]: I0107 10:11:28.823107 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/00244a4a-fa85-46ba-a6bd-37722a995e0e-builder-dockercfg-vc6bg-push\") pod \"sg-bridge-1-build\" (UID: \"00244a4a-fa85-46ba-a6bd-37722a995e0e\") " pod="service-telemetry/sg-bridge-1-build" Jan 07 10:11:28 crc kubenswrapper[5131]: I0107 10:11:28.823342 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/00244a4a-fa85-46ba-a6bd-37722a995e0e-builder-dockercfg-vc6bg-pull\") pod \"sg-bridge-1-build\" (UID: \"00244a4a-fa85-46ba-a6bd-37722a995e0e\") " pod="service-telemetry/sg-bridge-1-build" Jan 07 10:11:28 crc kubenswrapper[5131]: I0107 10:11:28.823467 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/00244a4a-fa85-46ba-a6bd-37722a995e0e-container-storage-run\") pod \"sg-bridge-1-build\" (UID: 
\"00244a4a-fa85-46ba-a6bd-37722a995e0e\") " pod="service-telemetry/sg-bridge-1-build" Jan 07 10:11:28 crc kubenswrapper[5131]: I0107 10:11:28.823617 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/00244a4a-fa85-46ba-a6bd-37722a995e0e-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"00244a4a-fa85-46ba-a6bd-37722a995e0e\") " pod="service-telemetry/sg-bridge-1-build" Jan 07 10:11:28 crc kubenswrapper[5131]: I0107 10:11:28.925978 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/00244a4a-fa85-46ba-a6bd-37722a995e0e-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"00244a4a-fa85-46ba-a6bd-37722a995e0e\") " pod="service-telemetry/sg-bridge-1-build" Jan 07 10:11:28 crc kubenswrapper[5131]: I0107 10:11:28.926144 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/00244a4a-fa85-46ba-a6bd-37722a995e0e-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"00244a4a-fa85-46ba-a6bd-37722a995e0e\") " pod="service-telemetry/sg-bridge-1-build" Jan 07 10:11:28 crc kubenswrapper[5131]: I0107 10:11:28.927246 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/00244a4a-fa85-46ba-a6bd-37722a995e0e-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"00244a4a-fa85-46ba-a6bd-37722a995e0e\") " pod="service-telemetry/sg-bridge-1-build" Jan 07 10:11:28 crc kubenswrapper[5131]: I0107 10:11:28.927150 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/00244a4a-fa85-46ba-a6bd-37722a995e0e-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"00244a4a-fa85-46ba-a6bd-37722a995e0e\") " 
pod="service-telemetry/sg-bridge-1-build" Jan 07 10:11:28 crc kubenswrapper[5131]: I0107 10:11:28.927344 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/00244a4a-fa85-46ba-a6bd-37722a995e0e-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"00244a4a-fa85-46ba-a6bd-37722a995e0e\") " pod="service-telemetry/sg-bridge-1-build" Jan 07 10:11:28 crc kubenswrapper[5131]: I0107 10:11:28.927458 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/00244a4a-fa85-46ba-a6bd-37722a995e0e-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"00244a4a-fa85-46ba-a6bd-37722a995e0e\") " pod="service-telemetry/sg-bridge-1-build" Jan 07 10:11:28 crc kubenswrapper[5131]: I0107 10:11:28.927597 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/00244a4a-fa85-46ba-a6bd-37722a995e0e-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"00244a4a-fa85-46ba-a6bd-37722a995e0e\") " pod="service-telemetry/sg-bridge-1-build" Jan 07 10:11:28 crc kubenswrapper[5131]: I0107 10:11:28.927727 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/00244a4a-fa85-46ba-a6bd-37722a995e0e-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"00244a4a-fa85-46ba-a6bd-37722a995e0e\") " pod="service-telemetry/sg-bridge-1-build" Jan 07 10:11:28 crc kubenswrapper[5131]: I0107 10:11:28.927788 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/00244a4a-fa85-46ba-a6bd-37722a995e0e-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"00244a4a-fa85-46ba-a6bd-37722a995e0e\") " pod="service-telemetry/sg-bridge-1-build" Jan 07 10:11:28 crc kubenswrapper[5131]: I0107 10:11:28.929111 5131 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/00244a4a-fa85-46ba-a6bd-37722a995e0e-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"00244a4a-fa85-46ba-a6bd-37722a995e0e\") " pod="service-telemetry/sg-bridge-1-build" Jan 07 10:11:28 crc kubenswrapper[5131]: I0107 10:11:28.929168 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/00244a4a-fa85-46ba-a6bd-37722a995e0e-builder-dockercfg-vc6bg-push\") pod \"sg-bridge-1-build\" (UID: \"00244a4a-fa85-46ba-a6bd-37722a995e0e\") " pod="service-telemetry/sg-bridge-1-build" Jan 07 10:11:28 crc kubenswrapper[5131]: I0107 10:11:28.929279 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/00244a4a-fa85-46ba-a6bd-37722a995e0e-builder-dockercfg-vc6bg-pull\") pod \"sg-bridge-1-build\" (UID: \"00244a4a-fa85-46ba-a6bd-37722a995e0e\") " pod="service-telemetry/sg-bridge-1-build" Jan 07 10:11:28 crc kubenswrapper[5131]: I0107 10:11:28.929328 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/00244a4a-fa85-46ba-a6bd-37722a995e0e-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"00244a4a-fa85-46ba-a6bd-37722a995e0e\") " pod="service-telemetry/sg-bridge-1-build" Jan 07 10:11:28 crc kubenswrapper[5131]: I0107 10:11:28.929389 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/00244a4a-fa85-46ba-a6bd-37722a995e0e-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"00244a4a-fa85-46ba-a6bd-37722a995e0e\") " pod="service-telemetry/sg-bridge-1-build" Jan 07 10:11:28 crc kubenswrapper[5131]: I0107 10:11:28.929442 5131 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/00244a4a-fa85-46ba-a6bd-37722a995e0e-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"00244a4a-fa85-46ba-a6bd-37722a995e0e\") " pod="service-telemetry/sg-bridge-1-build" Jan 07 10:11:28 crc kubenswrapper[5131]: I0107 10:11:28.929524 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-55z45\" (UniqueName: \"kubernetes.io/projected/00244a4a-fa85-46ba-a6bd-37722a995e0e-kube-api-access-55z45\") pod \"sg-bridge-1-build\" (UID: \"00244a4a-fa85-46ba-a6bd-37722a995e0e\") " pod="service-telemetry/sg-bridge-1-build" Jan 07 10:11:28 crc kubenswrapper[5131]: I0107 10:11:28.929612 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/00244a4a-fa85-46ba-a6bd-37722a995e0e-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"00244a4a-fa85-46ba-a6bd-37722a995e0e\") " pod="service-telemetry/sg-bridge-1-build" Jan 07 10:11:28 crc kubenswrapper[5131]: I0107 10:11:28.931267 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/00244a4a-fa85-46ba-a6bd-37722a995e0e-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"00244a4a-fa85-46ba-a6bd-37722a995e0e\") " pod="service-telemetry/sg-bridge-1-build" Jan 07 10:11:28 crc kubenswrapper[5131]: I0107 10:11:28.931397 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/00244a4a-fa85-46ba-a6bd-37722a995e0e-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"00244a4a-fa85-46ba-a6bd-37722a995e0e\") " pod="service-telemetry/sg-bridge-1-build" Jan 07 10:11:28 crc kubenswrapper[5131]: I0107 10:11:28.931658 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: 
\"kubernetes.io/empty-dir/00244a4a-fa85-46ba-a6bd-37722a995e0e-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"00244a4a-fa85-46ba-a6bd-37722a995e0e\") " pod="service-telemetry/sg-bridge-1-build" Jan 07 10:11:28 crc kubenswrapper[5131]: I0107 10:11:28.931897 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/00244a4a-fa85-46ba-a6bd-37722a995e0e-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"00244a4a-fa85-46ba-a6bd-37722a995e0e\") " pod="service-telemetry/sg-bridge-1-build" Jan 07 10:11:28 crc kubenswrapper[5131]: I0107 10:11:28.937455 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/00244a4a-fa85-46ba-a6bd-37722a995e0e-builder-dockercfg-vc6bg-pull\") pod \"sg-bridge-1-build\" (UID: \"00244a4a-fa85-46ba-a6bd-37722a995e0e\") " pod="service-telemetry/sg-bridge-1-build" Jan 07 10:11:28 crc kubenswrapper[5131]: I0107 10:11:28.937866 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/00244a4a-fa85-46ba-a6bd-37722a995e0e-builder-dockercfg-vc6bg-push\") pod \"sg-bridge-1-build\" (UID: \"00244a4a-fa85-46ba-a6bd-37722a995e0e\") " pod="service-telemetry/sg-bridge-1-build" Jan 07 10:11:28 crc kubenswrapper[5131]: I0107 10:11:28.965445 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-55z45\" (UniqueName: \"kubernetes.io/projected/00244a4a-fa85-46ba-a6bd-37722a995e0e-kube-api-access-55z45\") pod \"sg-bridge-1-build\" (UID: \"00244a4a-fa85-46ba-a6bd-37722a995e0e\") " pod="service-telemetry/sg-bridge-1-build" Jan 07 10:11:29 crc kubenswrapper[5131]: I0107 10:11:29.248402 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Jan 07 10:11:29 crc kubenswrapper[5131]: I0107 10:11:29.536863 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 07 10:11:29 crc kubenswrapper[5131]: I0107 10:11:29.546337 5131 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 07 10:11:29 crc kubenswrapper[5131]: I0107 10:11:29.996939 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"00244a4a-fa85-46ba-a6bd-37722a995e0e","Type":"ContainerStarted","Data":"ea4999411cb66efa2a17581ccef379317f2657e97b53165c1fdab2d42565516e"} Jan 07 10:11:29 crc kubenswrapper[5131]: I0107 10:11:29.996999 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"00244a4a-fa85-46ba-a6bd-37722a995e0e","Type":"ContainerStarted","Data":"c2c7efc367fba2c5ef1161b927166828d040de5c83a9b9a638c831cda7d7a97e"} Jan 07 10:11:31 crc kubenswrapper[5131]: I0107 10:11:31.007035 5131 generic.go:358] "Generic (PLEG): container finished" podID="00244a4a-fa85-46ba-a6bd-37722a995e0e" containerID="ea4999411cb66efa2a17581ccef379317f2657e97b53165c1fdab2d42565516e" exitCode=0 Jan 07 10:11:31 crc kubenswrapper[5131]: I0107 10:11:31.007296 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"00244a4a-fa85-46ba-a6bd-37722a995e0e","Type":"ContainerDied","Data":"ea4999411cb66efa2a17581ccef379317f2657e97b53165c1fdab2d42565516e"} Jan 07 10:11:32 crc kubenswrapper[5131]: I0107 10:11:32.020857 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"00244a4a-fa85-46ba-a6bd-37722a995e0e","Type":"ContainerStarted","Data":"73e81399681019fb63d5e4086a40b9c4732653c1c4d131b236b83139c1e9e30a"} Jan 07 10:11:32 crc kubenswrapper[5131]: I0107 10:11:32.061205 5131 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="service-telemetry/sg-bridge-1-build" podStartSLOduration=4.061186686 podStartE2EDuration="4.061186686s" podCreationTimestamp="2026-01-07 10:11:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 10:11:32.059442305 +0000 UTC m=+1320.225743909" watchObservedRunningTime="2026-01-07 10:11:32.061186686 +0000 UTC m=+1320.227488260" Jan 07 10:11:39 crc kubenswrapper[5131]: I0107 10:11:39.133403 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 07 10:11:39 crc kubenswrapper[5131]: I0107 10:11:39.134240 5131 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/sg-bridge-1-build" podUID="00244a4a-fa85-46ba-a6bd-37722a995e0e" containerName="docker-build" containerID="cri-o://73e81399681019fb63d5e4086a40b9c4732653c1c4d131b236b83139c1e9e30a" gracePeriod=30 Jan 07 10:11:39 crc kubenswrapper[5131]: I0107 10:11:39.668571 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_00244a4a-fa85-46ba-a6bd-37722a995e0e/docker-build/0.log" Jan 07 10:11:39 crc kubenswrapper[5131]: I0107 10:11:39.669536 5131 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Jan 07 10:11:39 crc kubenswrapper[5131]: I0107 10:11:39.717456 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/00244a4a-fa85-46ba-a6bd-37722a995e0e-buildcachedir\") pod \"00244a4a-fa85-46ba-a6bd-37722a995e0e\" (UID: \"00244a4a-fa85-46ba-a6bd-37722a995e0e\") " Jan 07 10:11:39 crc kubenswrapper[5131]: I0107 10:11:39.717563 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/00244a4a-fa85-46ba-a6bd-37722a995e0e-container-storage-root\") pod \"00244a4a-fa85-46ba-a6bd-37722a995e0e\" (UID: \"00244a4a-fa85-46ba-a6bd-37722a995e0e\") " Jan 07 10:11:39 crc kubenswrapper[5131]: I0107 10:11:39.717686 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00244a4a-fa85-46ba-a6bd-37722a995e0e-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "00244a4a-fa85-46ba-a6bd-37722a995e0e" (UID: "00244a4a-fa85-46ba-a6bd-37722a995e0e"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 07 10:11:39 crc kubenswrapper[5131]: I0107 10:11:39.717728 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/00244a4a-fa85-46ba-a6bd-37722a995e0e-build-blob-cache\") pod \"00244a4a-fa85-46ba-a6bd-37722a995e0e\" (UID: \"00244a4a-fa85-46ba-a6bd-37722a995e0e\") " Jan 07 10:11:39 crc kubenswrapper[5131]: I0107 10:11:39.717922 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/00244a4a-fa85-46ba-a6bd-37722a995e0e-builder-dockercfg-vc6bg-pull\") pod \"00244a4a-fa85-46ba-a6bd-37722a995e0e\" (UID: \"00244a4a-fa85-46ba-a6bd-37722a995e0e\") " Jan 07 10:11:39 crc kubenswrapper[5131]: I0107 10:11:39.718027 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/00244a4a-fa85-46ba-a6bd-37722a995e0e-node-pullsecrets\") pod \"00244a4a-fa85-46ba-a6bd-37722a995e0e\" (UID: \"00244a4a-fa85-46ba-a6bd-37722a995e0e\") " Jan 07 10:11:39 crc kubenswrapper[5131]: I0107 10:11:39.718190 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/00244a4a-fa85-46ba-a6bd-37722a995e0e-buildworkdir\") pod \"00244a4a-fa85-46ba-a6bd-37722a995e0e\" (UID: \"00244a4a-fa85-46ba-a6bd-37722a995e0e\") " Jan 07 10:11:39 crc kubenswrapper[5131]: I0107 10:11:39.718214 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/00244a4a-fa85-46ba-a6bd-37722a995e0e-build-system-configs\") pod \"00244a4a-fa85-46ba-a6bd-37722a995e0e\" (UID: \"00244a4a-fa85-46ba-a6bd-37722a995e0e\") " Jan 07 10:11:39 crc kubenswrapper[5131]: I0107 10:11:39.718196 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/host-path/00244a4a-fa85-46ba-a6bd-37722a995e0e-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "00244a4a-fa85-46ba-a6bd-37722a995e0e" (UID: "00244a4a-fa85-46ba-a6bd-37722a995e0e"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 07 10:11:39 crc kubenswrapper[5131]: I0107 10:11:39.718232 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/00244a4a-fa85-46ba-a6bd-37722a995e0e-build-ca-bundles\") pod \"00244a4a-fa85-46ba-a6bd-37722a995e0e\" (UID: \"00244a4a-fa85-46ba-a6bd-37722a995e0e\") " Jan 07 10:11:39 crc kubenswrapper[5131]: I0107 10:11:39.718397 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/00244a4a-fa85-46ba-a6bd-37722a995e0e-builder-dockercfg-vc6bg-push\") pod \"00244a4a-fa85-46ba-a6bd-37722a995e0e\" (UID: \"00244a4a-fa85-46ba-a6bd-37722a995e0e\") " Jan 07 10:11:39 crc kubenswrapper[5131]: I0107 10:11:39.718454 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/00244a4a-fa85-46ba-a6bd-37722a995e0e-container-storage-run\") pod \"00244a4a-fa85-46ba-a6bd-37722a995e0e\" (UID: \"00244a4a-fa85-46ba-a6bd-37722a995e0e\") " Jan 07 10:11:39 crc kubenswrapper[5131]: I0107 10:11:39.718558 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/00244a4a-fa85-46ba-a6bd-37722a995e0e-build-proxy-ca-bundles\") pod \"00244a4a-fa85-46ba-a6bd-37722a995e0e\" (UID: \"00244a4a-fa85-46ba-a6bd-37722a995e0e\") " Jan 07 10:11:39 crc kubenswrapper[5131]: I0107 10:11:39.718634 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55z45\" (UniqueName: 
\"kubernetes.io/projected/00244a4a-fa85-46ba-a6bd-37722a995e0e-kube-api-access-55z45\") pod \"00244a4a-fa85-46ba-a6bd-37722a995e0e\" (UID: \"00244a4a-fa85-46ba-a6bd-37722a995e0e\") " Jan 07 10:11:39 crc kubenswrapper[5131]: I0107 10:11:39.718916 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00244a4a-fa85-46ba-a6bd-37722a995e0e-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "00244a4a-fa85-46ba-a6bd-37722a995e0e" (UID: "00244a4a-fa85-46ba-a6bd-37722a995e0e"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:11:39 crc kubenswrapper[5131]: I0107 10:11:39.719427 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00244a4a-fa85-46ba-a6bd-37722a995e0e-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "00244a4a-fa85-46ba-a6bd-37722a995e0e" (UID: "00244a4a-fa85-46ba-a6bd-37722a995e0e"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 10:11:39 crc kubenswrapper[5131]: I0107 10:11:39.719516 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00244a4a-fa85-46ba-a6bd-37722a995e0e-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "00244a4a-fa85-46ba-a6bd-37722a995e0e" (UID: "00244a4a-fa85-46ba-a6bd-37722a995e0e"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 10:11:39 crc kubenswrapper[5131]: I0107 10:11:39.719586 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00244a4a-fa85-46ba-a6bd-37722a995e0e-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "00244a4a-fa85-46ba-a6bd-37722a995e0e" (UID: "00244a4a-fa85-46ba-a6bd-37722a995e0e"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 10:11:39 crc kubenswrapper[5131]: I0107 10:11:39.719884 5131 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/00244a4a-fa85-46ba-a6bd-37722a995e0e-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 07 10:11:39 crc kubenswrapper[5131]: I0107 10:11:39.719941 5131 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/00244a4a-fa85-46ba-a6bd-37722a995e0e-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 07 10:11:39 crc kubenswrapper[5131]: I0107 10:11:39.719966 5131 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/00244a4a-fa85-46ba-a6bd-37722a995e0e-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 07 10:11:39 crc kubenswrapper[5131]: I0107 10:11:39.719987 5131 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/00244a4a-fa85-46ba-a6bd-37722a995e0e-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 07 10:11:39 crc kubenswrapper[5131]: I0107 10:11:39.720006 5131 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/00244a4a-fa85-46ba-a6bd-37722a995e0e-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 07 10:11:39 crc kubenswrapper[5131]: I0107 10:11:39.720025 5131 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/00244a4a-fa85-46ba-a6bd-37722a995e0e-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 07 10:11:39 crc kubenswrapper[5131]: I0107 10:11:39.720302 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00244a4a-fa85-46ba-a6bd-37722a995e0e-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod 
"00244a4a-fa85-46ba-a6bd-37722a995e0e" (UID: "00244a4a-fa85-46ba-a6bd-37722a995e0e"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:11:39 crc kubenswrapper[5131]: I0107 10:11:39.722634 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00244a4a-fa85-46ba-a6bd-37722a995e0e-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "00244a4a-fa85-46ba-a6bd-37722a995e0e" (UID: "00244a4a-fa85-46ba-a6bd-37722a995e0e"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:11:39 crc kubenswrapper[5131]: I0107 10:11:39.725719 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00244a4a-fa85-46ba-a6bd-37722a995e0e-builder-dockercfg-vc6bg-pull" (OuterVolumeSpecName: "builder-dockercfg-vc6bg-pull") pod "00244a4a-fa85-46ba-a6bd-37722a995e0e" (UID: "00244a4a-fa85-46ba-a6bd-37722a995e0e"). InnerVolumeSpecName "builder-dockercfg-vc6bg-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 10:11:39 crc kubenswrapper[5131]: I0107 10:11:39.726560 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00244a4a-fa85-46ba-a6bd-37722a995e0e-kube-api-access-55z45" (OuterVolumeSpecName: "kube-api-access-55z45") pod "00244a4a-fa85-46ba-a6bd-37722a995e0e" (UID: "00244a4a-fa85-46ba-a6bd-37722a995e0e"). InnerVolumeSpecName "kube-api-access-55z45". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 10:11:39 crc kubenswrapper[5131]: I0107 10:11:39.737196 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00244a4a-fa85-46ba-a6bd-37722a995e0e-builder-dockercfg-vc6bg-push" (OuterVolumeSpecName: "builder-dockercfg-vc6bg-push") pod "00244a4a-fa85-46ba-a6bd-37722a995e0e" (UID: "00244a4a-fa85-46ba-a6bd-37722a995e0e"). 
InnerVolumeSpecName "builder-dockercfg-vc6bg-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 10:11:39 crc kubenswrapper[5131]: I0107 10:11:39.788697 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00244a4a-fa85-46ba-a6bd-37722a995e0e-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "00244a4a-fa85-46ba-a6bd-37722a995e0e" (UID: "00244a4a-fa85-46ba-a6bd-37722a995e0e"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:11:39 crc kubenswrapper[5131]: I0107 10:11:39.821014 5131 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/00244a4a-fa85-46ba-a6bd-37722a995e0e-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 07 10:11:39 crc kubenswrapper[5131]: I0107 10:11:39.821059 5131 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/00244a4a-fa85-46ba-a6bd-37722a995e0e-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 07 10:11:39 crc kubenswrapper[5131]: I0107 10:11:39.821072 5131 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/00244a4a-fa85-46ba-a6bd-37722a995e0e-builder-dockercfg-vc6bg-pull\") on node \"crc\" DevicePath \"\"" Jan 07 10:11:39 crc kubenswrapper[5131]: I0107 10:11:39.821085 5131 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/00244a4a-fa85-46ba-a6bd-37722a995e0e-builder-dockercfg-vc6bg-push\") on node \"crc\" DevicePath \"\"" Jan 07 10:11:39 crc kubenswrapper[5131]: I0107 10:11:39.821132 5131 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/00244a4a-fa85-46ba-a6bd-37722a995e0e-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 07 10:11:39 crc 
kubenswrapper[5131]: I0107 10:11:39.821144 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-55z45\" (UniqueName: \"kubernetes.io/projected/00244a4a-fa85-46ba-a6bd-37722a995e0e-kube-api-access-55z45\") on node \"crc\" DevicePath \"\"" Jan 07 10:11:40 crc kubenswrapper[5131]: I0107 10:11:40.086080 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_00244a4a-fa85-46ba-a6bd-37722a995e0e/docker-build/0.log" Jan 07 10:11:40 crc kubenswrapper[5131]: I0107 10:11:40.086862 5131 generic.go:358] "Generic (PLEG): container finished" podID="00244a4a-fa85-46ba-a6bd-37722a995e0e" containerID="73e81399681019fb63d5e4086a40b9c4732653c1c4d131b236b83139c1e9e30a" exitCode=1 Jan 07 10:11:40 crc kubenswrapper[5131]: I0107 10:11:40.086929 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"00244a4a-fa85-46ba-a6bd-37722a995e0e","Type":"ContainerDied","Data":"73e81399681019fb63d5e4086a40b9c4732653c1c4d131b236b83139c1e9e30a"} Jan 07 10:11:40 crc kubenswrapper[5131]: I0107 10:11:40.086984 5131 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Jan 07 10:11:40 crc kubenswrapper[5131]: I0107 10:11:40.087011 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"00244a4a-fa85-46ba-a6bd-37722a995e0e","Type":"ContainerDied","Data":"c2c7efc367fba2c5ef1161b927166828d040de5c83a9b9a638c831cda7d7a97e"} Jan 07 10:11:40 crc kubenswrapper[5131]: I0107 10:11:40.087068 5131 scope.go:117] "RemoveContainer" containerID="73e81399681019fb63d5e4086a40b9c4732653c1c4d131b236b83139c1e9e30a" Jan 07 10:11:40 crc kubenswrapper[5131]: I0107 10:11:40.153881 5131 scope.go:117] "RemoveContainer" containerID="ea4999411cb66efa2a17581ccef379317f2657e97b53165c1fdab2d42565516e" Jan 07 10:11:40 crc kubenswrapper[5131]: I0107 10:11:40.156817 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 07 10:11:40 crc kubenswrapper[5131]: I0107 10:11:40.165782 5131 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 07 10:11:40 crc kubenswrapper[5131]: I0107 10:11:40.189186 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00244a4a-fa85-46ba-a6bd-37722a995e0e" path="/var/lib/kubelet/pods/00244a4a-fa85-46ba-a6bd-37722a995e0e/volumes" Jan 07 10:11:40 crc kubenswrapper[5131]: I0107 10:11:40.235190 5131 scope.go:117] "RemoveContainer" containerID="73e81399681019fb63d5e4086a40b9c4732653c1c4d131b236b83139c1e9e30a" Jan 07 10:11:40 crc kubenswrapper[5131]: E0107 10:11:40.235923 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73e81399681019fb63d5e4086a40b9c4732653c1c4d131b236b83139c1e9e30a\": container with ID starting with 73e81399681019fb63d5e4086a40b9c4732653c1c4d131b236b83139c1e9e30a not found: ID does not exist" containerID="73e81399681019fb63d5e4086a40b9c4732653c1c4d131b236b83139c1e9e30a" Jan 07 10:11:40 crc kubenswrapper[5131]: I0107 
10:11:40.236055 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73e81399681019fb63d5e4086a40b9c4732653c1c4d131b236b83139c1e9e30a"} err="failed to get container status \"73e81399681019fb63d5e4086a40b9c4732653c1c4d131b236b83139c1e9e30a\": rpc error: code = NotFound desc = could not find container \"73e81399681019fb63d5e4086a40b9c4732653c1c4d131b236b83139c1e9e30a\": container with ID starting with 73e81399681019fb63d5e4086a40b9c4732653c1c4d131b236b83139c1e9e30a not found: ID does not exist" Jan 07 10:11:40 crc kubenswrapper[5131]: I0107 10:11:40.236155 5131 scope.go:117] "RemoveContainer" containerID="ea4999411cb66efa2a17581ccef379317f2657e97b53165c1fdab2d42565516e" Jan 07 10:11:40 crc kubenswrapper[5131]: E0107 10:11:40.239393 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea4999411cb66efa2a17581ccef379317f2657e97b53165c1fdab2d42565516e\": container with ID starting with ea4999411cb66efa2a17581ccef379317f2657e97b53165c1fdab2d42565516e not found: ID does not exist" containerID="ea4999411cb66efa2a17581ccef379317f2657e97b53165c1fdab2d42565516e" Jan 07 10:11:40 crc kubenswrapper[5131]: I0107 10:11:40.239487 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea4999411cb66efa2a17581ccef379317f2657e97b53165c1fdab2d42565516e"} err="failed to get container status \"ea4999411cb66efa2a17581ccef379317f2657e97b53165c1fdab2d42565516e\": rpc error: code = NotFound desc = could not find container \"ea4999411cb66efa2a17581ccef379317f2657e97b53165c1fdab2d42565516e\": container with ID starting with ea4999411cb66efa2a17581ccef379317f2657e97b53165c1fdab2d42565516e not found: ID does not exist" Jan 07 10:11:41 crc kubenswrapper[5131]: I0107 10:11:41.207178 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-bridge-2-build"] Jan 07 10:11:41 crc kubenswrapper[5131]: I0107 
10:11:41.210008 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="00244a4a-fa85-46ba-a6bd-37722a995e0e" containerName="manage-dockerfile" Jan 07 10:11:41 crc kubenswrapper[5131]: I0107 10:11:41.210240 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="00244a4a-fa85-46ba-a6bd-37722a995e0e" containerName="manage-dockerfile" Jan 07 10:11:41 crc kubenswrapper[5131]: I0107 10:11:41.210426 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="00244a4a-fa85-46ba-a6bd-37722a995e0e" containerName="docker-build" Jan 07 10:11:41 crc kubenswrapper[5131]: I0107 10:11:41.210581 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="00244a4a-fa85-46ba-a6bd-37722a995e0e" containerName="docker-build" Jan 07 10:11:41 crc kubenswrapper[5131]: I0107 10:11:41.211107 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="00244a4a-fa85-46ba-a6bd-37722a995e0e" containerName="docker-build" Jan 07 10:11:41 crc kubenswrapper[5131]: I0107 10:11:41.223930 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Jan 07 10:11:41 crc kubenswrapper[5131]: I0107 10:11:41.226350 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-2-build"] Jan 07 10:11:41 crc kubenswrapper[5131]: I0107 10:11:41.227032 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-2-sys-config\"" Jan 07 10:11:41 crc kubenswrapper[5131]: I0107 10:11:41.227109 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-2-global-ca\"" Jan 07 10:11:41 crc kubenswrapper[5131]: I0107 10:11:41.227799 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-vc6bg\"" Jan 07 10:11:41 crc kubenswrapper[5131]: I0107 10:11:41.228705 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-2-ca\"" Jan 07 10:11:41 crc kubenswrapper[5131]: I0107 10:11:41.343122 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/45641b95-68b2-45cb-aafb-64ab77f33a27-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"45641b95-68b2-45cb-aafb-64ab77f33a27\") " pod="service-telemetry/sg-bridge-2-build" Jan 07 10:11:41 crc kubenswrapper[5131]: I0107 10:11:41.343224 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45641b95-68b2-45cb-aafb-64ab77f33a27-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"45641b95-68b2-45cb-aafb-64ab77f33a27\") " pod="service-telemetry/sg-bridge-2-build" Jan 07 10:11:41 crc kubenswrapper[5131]: I0107 10:11:41.343282 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" 
(UniqueName: \"kubernetes.io/host-path/45641b95-68b2-45cb-aafb-64ab77f33a27-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"45641b95-68b2-45cb-aafb-64ab77f33a27\") " pod="service-telemetry/sg-bridge-2-build" Jan 07 10:11:41 crc kubenswrapper[5131]: I0107 10:11:41.343433 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/45641b95-68b2-45cb-aafb-64ab77f33a27-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"45641b95-68b2-45cb-aafb-64ab77f33a27\") " pod="service-telemetry/sg-bridge-2-build" Jan 07 10:11:41 crc kubenswrapper[5131]: I0107 10:11:41.343512 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/45641b95-68b2-45cb-aafb-64ab77f33a27-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"45641b95-68b2-45cb-aafb-64ab77f33a27\") " pod="service-telemetry/sg-bridge-2-build" Jan 07 10:11:41 crc kubenswrapper[5131]: I0107 10:11:41.343603 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/45641b95-68b2-45cb-aafb-64ab77f33a27-builder-dockercfg-vc6bg-push\") pod \"sg-bridge-2-build\" (UID: \"45641b95-68b2-45cb-aafb-64ab77f33a27\") " pod="service-telemetry/sg-bridge-2-build" Jan 07 10:11:41 crc kubenswrapper[5131]: I0107 10:11:41.343651 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stgq7\" (UniqueName: \"kubernetes.io/projected/45641b95-68b2-45cb-aafb-64ab77f33a27-kube-api-access-stgq7\") pod \"sg-bridge-2-build\" (UID: \"45641b95-68b2-45cb-aafb-64ab77f33a27\") " pod="service-telemetry/sg-bridge-2-build" Jan 07 10:11:41 crc kubenswrapper[5131]: I0107 10:11:41.343726 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/45641b95-68b2-45cb-aafb-64ab77f33a27-builder-dockercfg-vc6bg-pull\") pod \"sg-bridge-2-build\" (UID: \"45641b95-68b2-45cb-aafb-64ab77f33a27\") " pod="service-telemetry/sg-bridge-2-build" Jan 07 10:11:41 crc kubenswrapper[5131]: I0107 10:11:41.343792 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45641b95-68b2-45cb-aafb-64ab77f33a27-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"45641b95-68b2-45cb-aafb-64ab77f33a27\") " pod="service-telemetry/sg-bridge-2-build" Jan 07 10:11:41 crc kubenswrapper[5131]: I0107 10:11:41.343888 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/45641b95-68b2-45cb-aafb-64ab77f33a27-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"45641b95-68b2-45cb-aafb-64ab77f33a27\") " pod="service-telemetry/sg-bridge-2-build" Jan 07 10:11:41 crc kubenswrapper[5131]: I0107 10:11:41.344052 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/45641b95-68b2-45cb-aafb-64ab77f33a27-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"45641b95-68b2-45cb-aafb-64ab77f33a27\") " pod="service-telemetry/sg-bridge-2-build" Jan 07 10:11:41 crc kubenswrapper[5131]: I0107 10:11:41.344085 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/45641b95-68b2-45cb-aafb-64ab77f33a27-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"45641b95-68b2-45cb-aafb-64ab77f33a27\") " pod="service-telemetry/sg-bridge-2-build" Jan 07 10:11:41 crc kubenswrapper[5131]: I0107 10:11:41.445970 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/45641b95-68b2-45cb-aafb-64ab77f33a27-builder-dockercfg-vc6bg-pull\") pod \"sg-bridge-2-build\" (UID: \"45641b95-68b2-45cb-aafb-64ab77f33a27\") " pod="service-telemetry/sg-bridge-2-build" Jan 07 10:11:41 crc kubenswrapper[5131]: I0107 10:11:41.446044 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45641b95-68b2-45cb-aafb-64ab77f33a27-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"45641b95-68b2-45cb-aafb-64ab77f33a27\") " pod="service-telemetry/sg-bridge-2-build" Jan 07 10:11:41 crc kubenswrapper[5131]: I0107 10:11:41.446077 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/45641b95-68b2-45cb-aafb-64ab77f33a27-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"45641b95-68b2-45cb-aafb-64ab77f33a27\") " pod="service-telemetry/sg-bridge-2-build" Jan 07 10:11:41 crc kubenswrapper[5131]: I0107 10:11:41.446114 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/45641b95-68b2-45cb-aafb-64ab77f33a27-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"45641b95-68b2-45cb-aafb-64ab77f33a27\") " pod="service-telemetry/sg-bridge-2-build" Jan 07 10:11:41 crc kubenswrapper[5131]: I0107 10:11:41.446134 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/45641b95-68b2-45cb-aafb-64ab77f33a27-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"45641b95-68b2-45cb-aafb-64ab77f33a27\") " pod="service-telemetry/sg-bridge-2-build" Jan 07 10:11:41 crc kubenswrapper[5131]: I0107 10:11:41.446207 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/45641b95-68b2-45cb-aafb-64ab77f33a27-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"45641b95-68b2-45cb-aafb-64ab77f33a27\") " pod="service-telemetry/sg-bridge-2-build" Jan 07 10:11:41 crc kubenswrapper[5131]: I0107 10:11:41.446240 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45641b95-68b2-45cb-aafb-64ab77f33a27-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"45641b95-68b2-45cb-aafb-64ab77f33a27\") " pod="service-telemetry/sg-bridge-2-build" Jan 07 10:11:41 crc kubenswrapper[5131]: I0107 10:11:41.446272 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/45641b95-68b2-45cb-aafb-64ab77f33a27-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"45641b95-68b2-45cb-aafb-64ab77f33a27\") " pod="service-telemetry/sg-bridge-2-build" Jan 07 10:11:41 crc kubenswrapper[5131]: I0107 10:11:41.446329 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/45641b95-68b2-45cb-aafb-64ab77f33a27-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"45641b95-68b2-45cb-aafb-64ab77f33a27\") " pod="service-telemetry/sg-bridge-2-build" Jan 07 10:11:41 crc kubenswrapper[5131]: I0107 10:11:41.446360 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/45641b95-68b2-45cb-aafb-64ab77f33a27-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"45641b95-68b2-45cb-aafb-64ab77f33a27\") " pod="service-telemetry/sg-bridge-2-build" Jan 07 10:11:41 crc kubenswrapper[5131]: I0107 10:11:41.446423 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: 
\"kubernetes.io/secret/45641b95-68b2-45cb-aafb-64ab77f33a27-builder-dockercfg-vc6bg-push\") pod \"sg-bridge-2-build\" (UID: \"45641b95-68b2-45cb-aafb-64ab77f33a27\") " pod="service-telemetry/sg-bridge-2-build" Jan 07 10:11:41 crc kubenswrapper[5131]: I0107 10:11:41.446449 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-stgq7\" (UniqueName: \"kubernetes.io/projected/45641b95-68b2-45cb-aafb-64ab77f33a27-kube-api-access-stgq7\") pod \"sg-bridge-2-build\" (UID: \"45641b95-68b2-45cb-aafb-64ab77f33a27\") " pod="service-telemetry/sg-bridge-2-build" Jan 07 10:11:41 crc kubenswrapper[5131]: I0107 10:11:41.446938 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/45641b95-68b2-45cb-aafb-64ab77f33a27-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"45641b95-68b2-45cb-aafb-64ab77f33a27\") " pod="service-telemetry/sg-bridge-2-build" Jan 07 10:11:41 crc kubenswrapper[5131]: I0107 10:11:41.447127 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/45641b95-68b2-45cb-aafb-64ab77f33a27-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"45641b95-68b2-45cb-aafb-64ab77f33a27\") " pod="service-telemetry/sg-bridge-2-build" Jan 07 10:11:41 crc kubenswrapper[5131]: I0107 10:11:41.447129 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/45641b95-68b2-45cb-aafb-64ab77f33a27-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"45641b95-68b2-45cb-aafb-64ab77f33a27\") " pod="service-telemetry/sg-bridge-2-build" Jan 07 10:11:41 crc kubenswrapper[5131]: I0107 10:11:41.447393 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/45641b95-68b2-45cb-aafb-64ab77f33a27-buildworkdir\") pod \"sg-bridge-2-build\" (UID: 
\"45641b95-68b2-45cb-aafb-64ab77f33a27\") " pod="service-telemetry/sg-bridge-2-build" Jan 07 10:11:41 crc kubenswrapper[5131]: I0107 10:11:41.447585 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/45641b95-68b2-45cb-aafb-64ab77f33a27-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"45641b95-68b2-45cb-aafb-64ab77f33a27\") " pod="service-telemetry/sg-bridge-2-build" Jan 07 10:11:41 crc kubenswrapper[5131]: I0107 10:11:41.447888 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/45641b95-68b2-45cb-aafb-64ab77f33a27-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"45641b95-68b2-45cb-aafb-64ab77f33a27\") " pod="service-telemetry/sg-bridge-2-build" Jan 07 10:11:41 crc kubenswrapper[5131]: I0107 10:11:41.448091 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/45641b95-68b2-45cb-aafb-64ab77f33a27-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"45641b95-68b2-45cb-aafb-64ab77f33a27\") " pod="service-telemetry/sg-bridge-2-build" Jan 07 10:11:41 crc kubenswrapper[5131]: I0107 10:11:41.448496 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45641b95-68b2-45cb-aafb-64ab77f33a27-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"45641b95-68b2-45cb-aafb-64ab77f33a27\") " pod="service-telemetry/sg-bridge-2-build" Jan 07 10:11:41 crc kubenswrapper[5131]: I0107 10:11:41.449227 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45641b95-68b2-45cb-aafb-64ab77f33a27-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"45641b95-68b2-45cb-aafb-64ab77f33a27\") " pod="service-telemetry/sg-bridge-2-build" Jan 07 10:11:41 
crc kubenswrapper[5131]: I0107 10:11:41.455316 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/45641b95-68b2-45cb-aafb-64ab77f33a27-builder-dockercfg-vc6bg-push\") pod \"sg-bridge-2-build\" (UID: \"45641b95-68b2-45cb-aafb-64ab77f33a27\") " pod="service-telemetry/sg-bridge-2-build" Jan 07 10:11:41 crc kubenswrapper[5131]: I0107 10:11:41.461056 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/45641b95-68b2-45cb-aafb-64ab77f33a27-builder-dockercfg-vc6bg-pull\") pod \"sg-bridge-2-build\" (UID: \"45641b95-68b2-45cb-aafb-64ab77f33a27\") " pod="service-telemetry/sg-bridge-2-build" Jan 07 10:11:41 crc kubenswrapper[5131]: I0107 10:11:41.478154 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-stgq7\" (UniqueName: \"kubernetes.io/projected/45641b95-68b2-45cb-aafb-64ab77f33a27-kube-api-access-stgq7\") pod \"sg-bridge-2-build\" (UID: \"45641b95-68b2-45cb-aafb-64ab77f33a27\") " pod="service-telemetry/sg-bridge-2-build" Jan 07 10:11:41 crc kubenswrapper[5131]: I0107 10:11:41.552195 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Jan 07 10:11:41 crc kubenswrapper[5131]: I0107 10:11:41.894797 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-2-build"] Jan 07 10:11:42 crc kubenswrapper[5131]: I0107 10:11:42.110950 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"45641b95-68b2-45cb-aafb-64ab77f33a27","Type":"ContainerStarted","Data":"2604c7abf53fbdb1d6b755a8fa2ffff2049da5bab3211b358c282eb040f1ddea"} Jan 07 10:11:43 crc kubenswrapper[5131]: I0107 10:11:43.122111 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"45641b95-68b2-45cb-aafb-64ab77f33a27","Type":"ContainerStarted","Data":"64f901bf1af1c3ea453a4dac82488a7e50a6d816f303e18dd64cff30ac4a024b"} Jan 07 10:11:44 crc kubenswrapper[5131]: I0107 10:11:44.131796 5131 generic.go:358] "Generic (PLEG): container finished" podID="45641b95-68b2-45cb-aafb-64ab77f33a27" containerID="64f901bf1af1c3ea453a4dac82488a7e50a6d816f303e18dd64cff30ac4a024b" exitCode=0 Jan 07 10:11:44 crc kubenswrapper[5131]: I0107 10:11:44.131911 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"45641b95-68b2-45cb-aafb-64ab77f33a27","Type":"ContainerDied","Data":"64f901bf1af1c3ea453a4dac82488a7e50a6d816f303e18dd64cff30ac4a024b"} Jan 07 10:11:45 crc kubenswrapper[5131]: I0107 10:11:45.144919 5131 generic.go:358] "Generic (PLEG): container finished" podID="45641b95-68b2-45cb-aafb-64ab77f33a27" containerID="568c3e3351c42e8d4562a064ac44e3b3a701c5d728cf07bb46ae22cf169b89aa" exitCode=0 Jan 07 10:11:45 crc kubenswrapper[5131]: I0107 10:11:45.145016 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"45641b95-68b2-45cb-aafb-64ab77f33a27","Type":"ContainerDied","Data":"568c3e3351c42e8d4562a064ac44e3b3a701c5d728cf07bb46ae22cf169b89aa"} Jan 07 10:11:45 
crc kubenswrapper[5131]: I0107 10:11:45.180096 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-2-build_45641b95-68b2-45cb-aafb-64ab77f33a27/manage-dockerfile/0.log" Jan 07 10:11:46 crc kubenswrapper[5131]: I0107 10:11:46.157381 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"45641b95-68b2-45cb-aafb-64ab77f33a27","Type":"ContainerStarted","Data":"6c4b5bf102f8bf81cc562b32a644b2a0b5160b3568b3c88e3fb4a095491a822b"} Jan 07 10:11:46 crc kubenswrapper[5131]: I0107 10:11:46.200414 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-bridge-2-build" podStartSLOduration=5.200388634 podStartE2EDuration="5.200388634s" podCreationTimestamp="2026-01-07 10:11:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 10:11:46.193087453 +0000 UTC m=+1334.359389037" watchObservedRunningTime="2026-01-07 10:11:46.200388634 +0000 UTC m=+1334.366690228" Jan 07 10:12:00 crc kubenswrapper[5131]: I0107 10:12:00.145726 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29463012-26ldf"] Jan 07 10:12:00 crc kubenswrapper[5131]: I0107 10:12:00.252710 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29463012-26ldf"] Jan 07 10:12:00 crc kubenswrapper[5131]: I0107 10:12:00.252973 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29463012-26ldf" Jan 07 10:12:00 crc kubenswrapper[5131]: I0107 10:12:00.257455 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 07 10:12:00 crc kubenswrapper[5131]: I0107 10:12:00.257739 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-l8fwl\"" Jan 07 10:12:00 crc kubenswrapper[5131]: I0107 10:12:00.258082 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 07 10:12:00 crc kubenswrapper[5131]: I0107 10:12:00.338882 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf4q9\" (UniqueName: \"kubernetes.io/projected/ec70975e-67f8-46e9-9c01-3d1050806e82-kube-api-access-lf4q9\") pod \"auto-csr-approver-29463012-26ldf\" (UID: \"ec70975e-67f8-46e9-9c01-3d1050806e82\") " pod="openshift-infra/auto-csr-approver-29463012-26ldf" Jan 07 10:12:00 crc kubenswrapper[5131]: I0107 10:12:00.440939 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lf4q9\" (UniqueName: \"kubernetes.io/projected/ec70975e-67f8-46e9-9c01-3d1050806e82-kube-api-access-lf4q9\") pod \"auto-csr-approver-29463012-26ldf\" (UID: \"ec70975e-67f8-46e9-9c01-3d1050806e82\") " pod="openshift-infra/auto-csr-approver-29463012-26ldf" Jan 07 10:12:00 crc kubenswrapper[5131]: I0107 10:12:00.464101 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lf4q9\" (UniqueName: \"kubernetes.io/projected/ec70975e-67f8-46e9-9c01-3d1050806e82-kube-api-access-lf4q9\") pod \"auto-csr-approver-29463012-26ldf\" (UID: \"ec70975e-67f8-46e9-9c01-3d1050806e82\") " pod="openshift-infra/auto-csr-approver-29463012-26ldf" Jan 07 10:12:00 crc kubenswrapper[5131]: I0107 10:12:00.579220 5131 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29463012-26ldf" Jan 07 10:12:00 crc kubenswrapper[5131]: I0107 10:12:00.883970 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29463012-26ldf"] Jan 07 10:12:00 crc kubenswrapper[5131]: W0107 10:12:00.888567 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec70975e_67f8_46e9_9c01_3d1050806e82.slice/crio-ac04c567af0e0e4de8edc04288d80d3ef500857af5216682dea1eff96365121d WatchSource:0}: Error finding container ac04c567af0e0e4de8edc04288d80d3ef500857af5216682dea1eff96365121d: Status 404 returned error can't find the container with id ac04c567af0e0e4de8edc04288d80d3ef500857af5216682dea1eff96365121d Jan 07 10:12:01 crc kubenswrapper[5131]: I0107 10:12:01.271423 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29463012-26ldf" event={"ID":"ec70975e-67f8-46e9-9c01-3d1050806e82","Type":"ContainerStarted","Data":"ac04c567af0e0e4de8edc04288d80d3ef500857af5216682dea1eff96365121d"} Jan 07 10:12:02 crc kubenswrapper[5131]: I0107 10:12:02.278485 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29463012-26ldf" event={"ID":"ec70975e-67f8-46e9-9c01-3d1050806e82","Type":"ContainerStarted","Data":"b72377165b1e2e109687a3c5644c7cf63622723ed2ade5378196c0d0d382ba5a"} Jan 07 10:12:02 crc kubenswrapper[5131]: I0107 10:12:02.294519 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29463012-26ldf" podStartSLOduration=1.348927766 podStartE2EDuration="2.29450107s" podCreationTimestamp="2026-01-07 10:12:00 +0000 UTC" firstStartedPulling="2026-01-07 10:12:00.890605559 +0000 UTC m=+1349.056907163" lastFinishedPulling="2026-01-07 10:12:01.836178893 +0000 UTC m=+1350.002480467" observedRunningTime="2026-01-07 10:12:02.288877709 
+0000 UTC m=+1350.455179283" watchObservedRunningTime="2026-01-07 10:12:02.29450107 +0000 UTC m=+1350.460802654" Jan 07 10:12:03 crc kubenswrapper[5131]: I0107 10:12:03.289997 5131 generic.go:358] "Generic (PLEG): container finished" podID="ec70975e-67f8-46e9-9c01-3d1050806e82" containerID="b72377165b1e2e109687a3c5644c7cf63622723ed2ade5378196c0d0d382ba5a" exitCode=0 Jan 07 10:12:03 crc kubenswrapper[5131]: I0107 10:12:03.290263 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29463012-26ldf" event={"ID":"ec70975e-67f8-46e9-9c01-3d1050806e82","Type":"ContainerDied","Data":"b72377165b1e2e109687a3c5644c7cf63622723ed2ade5378196c0d0d382ba5a"} Jan 07 10:12:04 crc kubenswrapper[5131]: I0107 10:12:04.590073 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29463012-26ldf" Jan 07 10:12:04 crc kubenswrapper[5131]: I0107 10:12:04.699868 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lf4q9\" (UniqueName: \"kubernetes.io/projected/ec70975e-67f8-46e9-9c01-3d1050806e82-kube-api-access-lf4q9\") pod \"ec70975e-67f8-46e9-9c01-3d1050806e82\" (UID: \"ec70975e-67f8-46e9-9c01-3d1050806e82\") " Jan 07 10:12:04 crc kubenswrapper[5131]: I0107 10:12:04.705796 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec70975e-67f8-46e9-9c01-3d1050806e82-kube-api-access-lf4q9" (OuterVolumeSpecName: "kube-api-access-lf4q9") pod "ec70975e-67f8-46e9-9c01-3d1050806e82" (UID: "ec70975e-67f8-46e9-9c01-3d1050806e82"). InnerVolumeSpecName "kube-api-access-lf4q9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 10:12:04 crc kubenswrapper[5131]: I0107 10:12:04.802184 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lf4q9\" (UniqueName: \"kubernetes.io/projected/ec70975e-67f8-46e9-9c01-3d1050806e82-kube-api-access-lf4q9\") on node \"crc\" DevicePath \"\"" Jan 07 10:12:05 crc kubenswrapper[5131]: I0107 10:12:05.281284 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29463006-ks8hg"] Jan 07 10:12:05 crc kubenswrapper[5131]: I0107 10:12:05.287747 5131 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29463006-ks8hg"] Jan 07 10:12:05 crc kubenswrapper[5131]: I0107 10:12:05.306456 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29463012-26ldf" Jan 07 10:12:05 crc kubenswrapper[5131]: I0107 10:12:05.306474 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29463012-26ldf" event={"ID":"ec70975e-67f8-46e9-9c01-3d1050806e82","Type":"ContainerDied","Data":"ac04c567af0e0e4de8edc04288d80d3ef500857af5216682dea1eff96365121d"} Jan 07 10:12:05 crc kubenswrapper[5131]: I0107 10:12:05.306531 5131 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac04c567af0e0e4de8edc04288d80d3ef500857af5216682dea1eff96365121d" Jan 07 10:12:06 crc kubenswrapper[5131]: I0107 10:12:06.194351 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7afc1fba-f972-4fd7-adc5-ce2af0f7f6c3" path="/var/lib/kubelet/pods/7afc1fba-f972-4fd7-adc5-ce2af0f7f6c3/volumes" Jan 07 10:12:20 crc kubenswrapper[5131]: I0107 10:12:20.663345 5131 patch_prober.go:28] interesting pod/machine-config-daemon-dvdrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Jan 07 10:12:20 crc kubenswrapper[5131]: I0107 10:12:20.664134 5131 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 07 10:12:34 crc kubenswrapper[5131]: I0107 10:12:34.158906 5131 scope.go:117] "RemoveContainer" containerID="57f9209ac00616c2c1bc7ccfbad15dd8dcc0b0893e1acee47dc16602a94e3ab8" Jan 07 10:12:39 crc kubenswrapper[5131]: I0107 10:12:39.582800 5131 generic.go:358] "Generic (PLEG): container finished" podID="45641b95-68b2-45cb-aafb-64ab77f33a27" containerID="6c4b5bf102f8bf81cc562b32a644b2a0b5160b3568b3c88e3fb4a095491a822b" exitCode=0 Jan 07 10:12:39 crc kubenswrapper[5131]: I0107 10:12:39.582890 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"45641b95-68b2-45cb-aafb-64ab77f33a27","Type":"ContainerDied","Data":"6c4b5bf102f8bf81cc562b32a644b2a0b5160b3568b3c88e3fb4a095491a822b"} Jan 07 10:12:40 crc kubenswrapper[5131]: I0107 10:12:40.890109 5131 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Jan 07 10:12:40 crc kubenswrapper[5131]: I0107 10:12:40.968505 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/45641b95-68b2-45cb-aafb-64ab77f33a27-buildcachedir\") pod \"45641b95-68b2-45cb-aafb-64ab77f33a27\" (UID: \"45641b95-68b2-45cb-aafb-64ab77f33a27\") " Jan 07 10:12:40 crc kubenswrapper[5131]: I0107 10:12:40.968582 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/45641b95-68b2-45cb-aafb-64ab77f33a27-container-storage-root\") pod \"45641b95-68b2-45cb-aafb-64ab77f33a27\" (UID: \"45641b95-68b2-45cb-aafb-64ab77f33a27\") " Jan 07 10:12:40 crc kubenswrapper[5131]: I0107 10:12:40.968621 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45641b95-68b2-45cb-aafb-64ab77f33a27-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "45641b95-68b2-45cb-aafb-64ab77f33a27" (UID: "45641b95-68b2-45cb-aafb-64ab77f33a27"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 07 10:12:40 crc kubenswrapper[5131]: I0107 10:12:40.968627 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/45641b95-68b2-45cb-aafb-64ab77f33a27-builder-dockercfg-vc6bg-push\") pod \"45641b95-68b2-45cb-aafb-64ab77f33a27\" (UID: \"45641b95-68b2-45cb-aafb-64ab77f33a27\") " Jan 07 10:12:40 crc kubenswrapper[5131]: I0107 10:12:40.968671 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/45641b95-68b2-45cb-aafb-64ab77f33a27-container-storage-run\") pod \"45641b95-68b2-45cb-aafb-64ab77f33a27\" (UID: \"45641b95-68b2-45cb-aafb-64ab77f33a27\") " Jan 07 10:12:40 crc kubenswrapper[5131]: I0107 10:12:40.968706 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45641b95-68b2-45cb-aafb-64ab77f33a27-build-ca-bundles\") pod \"45641b95-68b2-45cb-aafb-64ab77f33a27\" (UID: \"45641b95-68b2-45cb-aafb-64ab77f33a27\") " Jan 07 10:12:40 crc kubenswrapper[5131]: I0107 10:12:40.968726 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/45641b95-68b2-45cb-aafb-64ab77f33a27-build-system-configs\") pod \"45641b95-68b2-45cb-aafb-64ab77f33a27\" (UID: \"45641b95-68b2-45cb-aafb-64ab77f33a27\") " Jan 07 10:12:40 crc kubenswrapper[5131]: I0107 10:12:40.968745 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/45641b95-68b2-45cb-aafb-64ab77f33a27-buildworkdir\") pod \"45641b95-68b2-45cb-aafb-64ab77f33a27\" (UID: \"45641b95-68b2-45cb-aafb-64ab77f33a27\") " Jan 07 10:12:40 crc kubenswrapper[5131]: I0107 10:12:40.968773 5131 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/45641b95-68b2-45cb-aafb-64ab77f33a27-build-blob-cache\") pod \"45641b95-68b2-45cb-aafb-64ab77f33a27\" (UID: \"45641b95-68b2-45cb-aafb-64ab77f33a27\") " Jan 07 10:12:40 crc kubenswrapper[5131]: I0107 10:12:40.968828 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/45641b95-68b2-45cb-aafb-64ab77f33a27-builder-dockercfg-vc6bg-pull\") pod \"45641b95-68b2-45cb-aafb-64ab77f33a27\" (UID: \"45641b95-68b2-45cb-aafb-64ab77f33a27\") " Jan 07 10:12:40 crc kubenswrapper[5131]: I0107 10:12:40.968885 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45641b95-68b2-45cb-aafb-64ab77f33a27-build-proxy-ca-bundles\") pod \"45641b95-68b2-45cb-aafb-64ab77f33a27\" (UID: \"45641b95-68b2-45cb-aafb-64ab77f33a27\") " Jan 07 10:12:40 crc kubenswrapper[5131]: I0107 10:12:40.968907 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stgq7\" (UniqueName: \"kubernetes.io/projected/45641b95-68b2-45cb-aafb-64ab77f33a27-kube-api-access-stgq7\") pod \"45641b95-68b2-45cb-aafb-64ab77f33a27\" (UID: \"45641b95-68b2-45cb-aafb-64ab77f33a27\") " Jan 07 10:12:40 crc kubenswrapper[5131]: I0107 10:12:40.968923 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/45641b95-68b2-45cb-aafb-64ab77f33a27-node-pullsecrets\") pod \"45641b95-68b2-45cb-aafb-64ab77f33a27\" (UID: \"45641b95-68b2-45cb-aafb-64ab77f33a27\") " Jan 07 10:12:40 crc kubenswrapper[5131]: I0107 10:12:40.969169 5131 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/45641b95-68b2-45cb-aafb-64ab77f33a27-buildcachedir\") on node \"crc\" DevicePath 
\"\"" Jan 07 10:12:40 crc kubenswrapper[5131]: I0107 10:12:40.969201 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45641b95-68b2-45cb-aafb-64ab77f33a27-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "45641b95-68b2-45cb-aafb-64ab77f33a27" (UID: "45641b95-68b2-45cb-aafb-64ab77f33a27"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 07 10:12:40 crc kubenswrapper[5131]: I0107 10:12:40.969793 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45641b95-68b2-45cb-aafb-64ab77f33a27-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "45641b95-68b2-45cb-aafb-64ab77f33a27" (UID: "45641b95-68b2-45cb-aafb-64ab77f33a27"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 10:12:40 crc kubenswrapper[5131]: I0107 10:12:40.970080 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45641b95-68b2-45cb-aafb-64ab77f33a27-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "45641b95-68b2-45cb-aafb-64ab77f33a27" (UID: "45641b95-68b2-45cb-aafb-64ab77f33a27"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 10:12:40 crc kubenswrapper[5131]: I0107 10:12:40.970073 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45641b95-68b2-45cb-aafb-64ab77f33a27-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "45641b95-68b2-45cb-aafb-64ab77f33a27" (UID: "45641b95-68b2-45cb-aafb-64ab77f33a27"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 10:12:40 crc kubenswrapper[5131]: I0107 10:12:40.970305 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45641b95-68b2-45cb-aafb-64ab77f33a27-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "45641b95-68b2-45cb-aafb-64ab77f33a27" (UID: "45641b95-68b2-45cb-aafb-64ab77f33a27"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:12:40 crc kubenswrapper[5131]: I0107 10:12:40.970817 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45641b95-68b2-45cb-aafb-64ab77f33a27-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "45641b95-68b2-45cb-aafb-64ab77f33a27" (UID: "45641b95-68b2-45cb-aafb-64ab77f33a27"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:12:40 crc kubenswrapper[5131]: I0107 10:12:40.974596 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45641b95-68b2-45cb-aafb-64ab77f33a27-builder-dockercfg-vc6bg-pull" (OuterVolumeSpecName: "builder-dockercfg-vc6bg-pull") pod "45641b95-68b2-45cb-aafb-64ab77f33a27" (UID: "45641b95-68b2-45cb-aafb-64ab77f33a27"). InnerVolumeSpecName "builder-dockercfg-vc6bg-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 10:12:40 crc kubenswrapper[5131]: I0107 10:12:40.975984 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45641b95-68b2-45cb-aafb-64ab77f33a27-builder-dockercfg-vc6bg-push" (OuterVolumeSpecName: "builder-dockercfg-vc6bg-push") pod "45641b95-68b2-45cb-aafb-64ab77f33a27" (UID: "45641b95-68b2-45cb-aafb-64ab77f33a27"). InnerVolumeSpecName "builder-dockercfg-vc6bg-push". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 10:12:40 crc kubenswrapper[5131]: I0107 10:12:40.976041 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45641b95-68b2-45cb-aafb-64ab77f33a27-kube-api-access-stgq7" (OuterVolumeSpecName: "kube-api-access-stgq7") pod "45641b95-68b2-45cb-aafb-64ab77f33a27" (UID: "45641b95-68b2-45cb-aafb-64ab77f33a27"). InnerVolumeSpecName "kube-api-access-stgq7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 10:12:41 crc kubenswrapper[5131]: I0107 10:12:41.069859 5131 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/45641b95-68b2-45cb-aafb-64ab77f33a27-builder-dockercfg-vc6bg-pull\") on node \"crc\" DevicePath \"\"" Jan 07 10:12:41 crc kubenswrapper[5131]: I0107 10:12:41.069906 5131 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45641b95-68b2-45cb-aafb-64ab77f33a27-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 07 10:12:41 crc kubenswrapper[5131]: I0107 10:12:41.069919 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-stgq7\" (UniqueName: \"kubernetes.io/projected/45641b95-68b2-45cb-aafb-64ab77f33a27-kube-api-access-stgq7\") on node \"crc\" DevicePath \"\"" Jan 07 10:12:41 crc kubenswrapper[5131]: I0107 10:12:41.069932 5131 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/45641b95-68b2-45cb-aafb-64ab77f33a27-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 07 10:12:41 crc kubenswrapper[5131]: I0107 10:12:41.069944 5131 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/45641b95-68b2-45cb-aafb-64ab77f33a27-builder-dockercfg-vc6bg-push\") on node \"crc\" DevicePath \"\"" Jan 07 10:12:41 crc kubenswrapper[5131]: I0107 
10:12:41.069958 5131 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/45641b95-68b2-45cb-aafb-64ab77f33a27-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 07 10:12:41 crc kubenswrapper[5131]: I0107 10:12:41.069969 5131 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45641b95-68b2-45cb-aafb-64ab77f33a27-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 07 10:12:41 crc kubenswrapper[5131]: I0107 10:12:41.069979 5131 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/45641b95-68b2-45cb-aafb-64ab77f33a27-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 07 10:12:41 crc kubenswrapper[5131]: I0107 10:12:41.069990 5131 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/45641b95-68b2-45cb-aafb-64ab77f33a27-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 07 10:12:41 crc kubenswrapper[5131]: I0107 10:12:41.138659 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45641b95-68b2-45cb-aafb-64ab77f33a27-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "45641b95-68b2-45cb-aafb-64ab77f33a27" (UID: "45641b95-68b2-45cb-aafb-64ab77f33a27"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:12:41 crc kubenswrapper[5131]: I0107 10:12:41.171330 5131 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/45641b95-68b2-45cb-aafb-64ab77f33a27-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 07 10:12:41 crc kubenswrapper[5131]: I0107 10:12:41.606438 5131 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Jan 07 10:12:41 crc kubenswrapper[5131]: I0107 10:12:41.606430 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"45641b95-68b2-45cb-aafb-64ab77f33a27","Type":"ContainerDied","Data":"2604c7abf53fbdb1d6b755a8fa2ffff2049da5bab3211b358c282eb040f1ddea"} Jan 07 10:12:41 crc kubenswrapper[5131]: I0107 10:12:41.606592 5131 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2604c7abf53fbdb1d6b755a8fa2ffff2049da5bab3211b358c282eb040f1ddea" Jan 07 10:12:41 crc kubenswrapper[5131]: I0107 10:12:41.804713 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45641b95-68b2-45cb-aafb-64ab77f33a27-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "45641b95-68b2-45cb-aafb-64ab77f33a27" (UID: "45641b95-68b2-45cb-aafb-64ab77f33a27"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:12:41 crc kubenswrapper[5131]: I0107 10:12:41.882931 5131 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/45641b95-68b2-45cb-aafb-64ab77f33a27-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 07 10:12:46 crc kubenswrapper[5131]: I0107 10:12:46.733496 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 07 10:12:46 crc kubenswrapper[5131]: I0107 10:12:46.734883 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="45641b95-68b2-45cb-aafb-64ab77f33a27" containerName="git-clone" Jan 07 10:12:46 crc kubenswrapper[5131]: I0107 10:12:46.734907 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="45641b95-68b2-45cb-aafb-64ab77f33a27" containerName="git-clone" Jan 07 10:12:46 crc kubenswrapper[5131]: I0107 10:12:46.734932 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="45641b95-68b2-45cb-aafb-64ab77f33a27" containerName="manage-dockerfile" Jan 07 10:12:46 crc kubenswrapper[5131]: I0107 10:12:46.734944 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="45641b95-68b2-45cb-aafb-64ab77f33a27" containerName="manage-dockerfile" Jan 07 10:12:46 crc kubenswrapper[5131]: I0107 10:12:46.734973 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ec70975e-67f8-46e9-9c01-3d1050806e82" containerName="oc" Jan 07 10:12:46 crc kubenswrapper[5131]: I0107 10:12:46.734983 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec70975e-67f8-46e9-9c01-3d1050806e82" containerName="oc" Jan 07 10:12:46 crc kubenswrapper[5131]: I0107 10:12:46.735024 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="45641b95-68b2-45cb-aafb-64ab77f33a27" containerName="docker-build" Jan 07 10:12:46 crc kubenswrapper[5131]: I0107 10:12:46.735036 5131 
state_mem.go:107] "Deleted CPUSet assignment" podUID="45641b95-68b2-45cb-aafb-64ab77f33a27" containerName="docker-build" Jan 07 10:12:46 crc kubenswrapper[5131]: I0107 10:12:46.735201 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="ec70975e-67f8-46e9-9c01-3d1050806e82" containerName="oc" Jan 07 10:12:46 crc kubenswrapper[5131]: I0107 10:12:46.735221 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="45641b95-68b2-45cb-aafb-64ab77f33a27" containerName="docker-build" Jan 07 10:12:46 crc kubenswrapper[5131]: I0107 10:12:46.805140 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 07 10:12:46 crc kubenswrapper[5131]: I0107 10:12:46.805325 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 07 10:12:46 crc kubenswrapper[5131]: I0107 10:12:46.808124 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-1-ca\"" Jan 07 10:12:46 crc kubenswrapper[5131]: I0107 10:12:46.808236 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-1-sys-config\"" Jan 07 10:12:46 crc kubenswrapper[5131]: I0107 10:12:46.808301 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-1-global-ca\"" Jan 07 10:12:46 crc kubenswrapper[5131]: I0107 10:12:46.810941 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-vc6bg\"" Jan 07 10:12:46 crc kubenswrapper[5131]: I0107 10:12:46.855968 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-builder-dockercfg-vc6bg-pull\") pod 
\"prometheus-webhook-snmp-1-build\" (UID: \"cc1891ac-f2b0-4964-82ae-81d7c3b4140b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 07 10:12:46 crc kubenswrapper[5131]: I0107 10:12:46.856041 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"cc1891ac-f2b0-4964-82ae-81d7c3b4140b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 07 10:12:46 crc kubenswrapper[5131]: I0107 10:12:46.856111 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"cc1891ac-f2b0-4964-82ae-81d7c3b4140b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 07 10:12:46 crc kubenswrapper[5131]: I0107 10:12:46.856143 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"cc1891ac-f2b0-4964-82ae-81d7c3b4140b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 07 10:12:46 crc kubenswrapper[5131]: I0107 10:12:46.856176 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"cc1891ac-f2b0-4964-82ae-81d7c3b4140b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 07 10:12:46 crc kubenswrapper[5131]: I0107 10:12:46.856253 5131 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss4vf\" (UniqueName: \"kubernetes.io/projected/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-kube-api-access-ss4vf\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"cc1891ac-f2b0-4964-82ae-81d7c3b4140b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 07 10:12:46 crc kubenswrapper[5131]: I0107 10:12:46.856286 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"cc1891ac-f2b0-4964-82ae-81d7c3b4140b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 07 10:12:46 crc kubenswrapper[5131]: I0107 10:12:46.856352 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"cc1891ac-f2b0-4964-82ae-81d7c3b4140b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 07 10:12:46 crc kubenswrapper[5131]: I0107 10:12:46.856396 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"cc1891ac-f2b0-4964-82ae-81d7c3b4140b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 07 10:12:46 crc kubenswrapper[5131]: I0107 10:12:46.856425 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"cc1891ac-f2b0-4964-82ae-81d7c3b4140b\") " 
pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 07 10:12:46 crc kubenswrapper[5131]: I0107 10:12:46.856489 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-builder-dockercfg-vc6bg-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"cc1891ac-f2b0-4964-82ae-81d7c3b4140b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 07 10:12:46 crc kubenswrapper[5131]: I0107 10:12:46.856539 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"cc1891ac-f2b0-4964-82ae-81d7c3b4140b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 07 10:12:46 crc kubenswrapper[5131]: I0107 10:12:46.957638 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-builder-dockercfg-vc6bg-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"cc1891ac-f2b0-4964-82ae-81d7c3b4140b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 07 10:12:46 crc kubenswrapper[5131]: I0107 10:12:46.957690 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"cc1891ac-f2b0-4964-82ae-81d7c3b4140b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 07 10:12:46 crc kubenswrapper[5131]: I0107 10:12:46.957729 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: 
\"kubernetes.io/secret/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-builder-dockercfg-vc6bg-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"cc1891ac-f2b0-4964-82ae-81d7c3b4140b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 07 10:12:46 crc kubenswrapper[5131]: I0107 10:12:46.957768 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"cc1891ac-f2b0-4964-82ae-81d7c3b4140b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 07 10:12:46 crc kubenswrapper[5131]: I0107 10:12:46.957793 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"cc1891ac-f2b0-4964-82ae-81d7c3b4140b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 07 10:12:46 crc kubenswrapper[5131]: I0107 10:12:46.957816 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"cc1891ac-f2b0-4964-82ae-81d7c3b4140b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 07 10:12:46 crc kubenswrapper[5131]: I0107 10:12:46.957934 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"cc1891ac-f2b0-4964-82ae-81d7c3b4140b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 07 10:12:46 crc kubenswrapper[5131]: I0107 10:12:46.958079 5131 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"cc1891ac-f2b0-4964-82ae-81d7c3b4140b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 07 10:12:46 crc kubenswrapper[5131]: I0107 10:12:46.958209 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ss4vf\" (UniqueName: \"kubernetes.io/projected/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-kube-api-access-ss4vf\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"cc1891ac-f2b0-4964-82ae-81d7c3b4140b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 07 10:12:46 crc kubenswrapper[5131]: I0107 10:12:46.958279 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"cc1891ac-f2b0-4964-82ae-81d7c3b4140b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 07 10:12:46 crc kubenswrapper[5131]: I0107 10:12:46.958349 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"cc1891ac-f2b0-4964-82ae-81d7c3b4140b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 07 10:12:46 crc kubenswrapper[5131]: I0107 10:12:46.958424 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"cc1891ac-f2b0-4964-82ae-81d7c3b4140b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 07 10:12:46 crc 
kubenswrapper[5131]: I0107 10:12:46.958463 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"cc1891ac-f2b0-4964-82ae-81d7c3b4140b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 07 10:12:46 crc kubenswrapper[5131]: I0107 10:12:46.959124 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"cc1891ac-f2b0-4964-82ae-81d7c3b4140b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 07 10:12:46 crc kubenswrapper[5131]: I0107 10:12:46.959490 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"cc1891ac-f2b0-4964-82ae-81d7c3b4140b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 07 10:12:46 crc kubenswrapper[5131]: I0107 10:12:46.959951 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"cc1891ac-f2b0-4964-82ae-81d7c3b4140b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 07 10:12:46 crc kubenswrapper[5131]: I0107 10:12:46.960186 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"cc1891ac-f2b0-4964-82ae-81d7c3b4140b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 07 10:12:46 crc 
kubenswrapper[5131]: I0107 10:12:46.960233 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"cc1891ac-f2b0-4964-82ae-81d7c3b4140b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 07 10:12:46 crc kubenswrapper[5131]: I0107 10:12:46.960369 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"cc1891ac-f2b0-4964-82ae-81d7c3b4140b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 07 10:12:46 crc kubenswrapper[5131]: I0107 10:12:46.960657 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"cc1891ac-f2b0-4964-82ae-81d7c3b4140b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 07 10:12:46 crc kubenswrapper[5131]: I0107 10:12:46.960736 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"cc1891ac-f2b0-4964-82ae-81d7c3b4140b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 07 10:12:46 crc kubenswrapper[5131]: I0107 10:12:46.969650 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-builder-dockercfg-vc6bg-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"cc1891ac-f2b0-4964-82ae-81d7c3b4140b\") " 
pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 07 10:12:46 crc kubenswrapper[5131]: I0107 10:12:46.972529 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-builder-dockercfg-vc6bg-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"cc1891ac-f2b0-4964-82ae-81d7c3b4140b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 07 10:12:46 crc kubenswrapper[5131]: I0107 10:12:46.995425 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ss4vf\" (UniqueName: \"kubernetes.io/projected/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-kube-api-access-ss4vf\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"cc1891ac-f2b0-4964-82ae-81d7c3b4140b\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 07 10:12:47 crc kubenswrapper[5131]: I0107 10:12:47.129280 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 07 10:12:47 crc kubenswrapper[5131]: I0107 10:12:47.485631 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 07 10:12:47 crc kubenswrapper[5131]: I0107 10:12:47.652427 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"cc1891ac-f2b0-4964-82ae-81d7c3b4140b","Type":"ContainerStarted","Data":"8fced41596119aff270ff66000d31cd34a044552209bdf493d0b65af743c94ff"} Jan 07 10:12:48 crc kubenswrapper[5131]: I0107 10:12:48.662076 5131 generic.go:358] "Generic (PLEG): container finished" podID="cc1891ac-f2b0-4964-82ae-81d7c3b4140b" containerID="ca1380b72f42ac574699be952f4369a568a90ad646cb7d57d325e00fb9404c7a" exitCode=0 Jan 07 10:12:48 crc kubenswrapper[5131]: I0107 10:12:48.662168 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"cc1891ac-f2b0-4964-82ae-81d7c3b4140b","Type":"ContainerDied","Data":"ca1380b72f42ac574699be952f4369a568a90ad646cb7d57d325e00fb9404c7a"} Jan 07 10:12:49 crc kubenswrapper[5131]: I0107 10:12:49.672755 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"cc1891ac-f2b0-4964-82ae-81d7c3b4140b","Type":"ContainerStarted","Data":"a03ace675ffa064328d0c2373b3e4adabea64953b52c016c93496c7af2830568"} Jan 07 10:12:49 crc kubenswrapper[5131]: I0107 10:12:49.720728 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-webhook-snmp-1-build" podStartSLOduration=3.72070735 podStartE2EDuration="3.72070735s" podCreationTimestamp="2026-01-07 10:12:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 10:12:49.711336899 +0000 UTC m=+1397.877638483" watchObservedRunningTime="2026-01-07 10:12:49.72070735 +0000 UTC m=+1397.887008924" Jan 07 10:12:50 crc kubenswrapper[5131]: I0107 10:12:50.663807 5131 patch_prober.go:28] interesting pod/machine-config-daemon-dvdrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 07 10:12:50 crc kubenswrapper[5131]: I0107 10:12:50.663963 5131 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 07 10:12:57 crc kubenswrapper[5131]: I0107 10:12:57.125357 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" 
pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 07 10:12:57 crc kubenswrapper[5131]: I0107 10:12:57.126529 5131 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/prometheus-webhook-snmp-1-build" podUID="cc1891ac-f2b0-4964-82ae-81d7c3b4140b" containerName="docker-build" containerID="cri-o://a03ace675ffa064328d0c2373b3e4adabea64953b52c016c93496c7af2830568" gracePeriod=30 Jan 07 10:12:57 crc kubenswrapper[5131]: I0107 10:12:57.709075 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_cc1891ac-f2b0-4964-82ae-81d7c3b4140b/docker-build/0.log" Jan 07 10:12:57 crc kubenswrapper[5131]: I0107 10:12:57.710132 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 07 10:12:57 crc kubenswrapper[5131]: I0107 10:12:57.740929 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_cc1891ac-f2b0-4964-82ae-81d7c3b4140b/docker-build/0.log" Jan 07 10:12:57 crc kubenswrapper[5131]: I0107 10:12:57.743337 5131 generic.go:358] "Generic (PLEG): container finished" podID="cc1891ac-f2b0-4964-82ae-81d7c3b4140b" containerID="a03ace675ffa064328d0c2373b3e4adabea64953b52c016c93496c7af2830568" exitCode=1 Jan 07 10:12:57 crc kubenswrapper[5131]: I0107 10:12:57.743475 5131 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 07 10:12:57 crc kubenswrapper[5131]: I0107 10:12:57.743531 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"cc1891ac-f2b0-4964-82ae-81d7c3b4140b","Type":"ContainerDied","Data":"a03ace675ffa064328d0c2373b3e4adabea64953b52c016c93496c7af2830568"} Jan 07 10:12:57 crc kubenswrapper[5131]: I0107 10:12:57.743597 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"cc1891ac-f2b0-4964-82ae-81d7c3b4140b","Type":"ContainerDied","Data":"8fced41596119aff270ff66000d31cd34a044552209bdf493d0b65af743c94ff"} Jan 07 10:12:57 crc kubenswrapper[5131]: I0107 10:12:57.743637 5131 scope.go:117] "RemoveContainer" containerID="a03ace675ffa064328d0c2373b3e4adabea64953b52c016c93496c7af2830568" Jan 07 10:12:57 crc kubenswrapper[5131]: I0107 10:12:57.782345 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-buildworkdir\") pod \"cc1891ac-f2b0-4964-82ae-81d7c3b4140b\" (UID: \"cc1891ac-f2b0-4964-82ae-81d7c3b4140b\") " Jan 07 10:12:57 crc kubenswrapper[5131]: I0107 10:12:57.782462 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-builder-dockercfg-vc6bg-push\") pod \"cc1891ac-f2b0-4964-82ae-81d7c3b4140b\" (UID: \"cc1891ac-f2b0-4964-82ae-81d7c3b4140b\") " Jan 07 10:12:57 crc kubenswrapper[5131]: I0107 10:12:57.782518 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-build-proxy-ca-bundles\") pod \"cc1891ac-f2b0-4964-82ae-81d7c3b4140b\" (UID: 
\"cc1891ac-f2b0-4964-82ae-81d7c3b4140b\") " Jan 07 10:12:57 crc kubenswrapper[5131]: I0107 10:12:57.782557 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-build-ca-bundles\") pod \"cc1891ac-f2b0-4964-82ae-81d7c3b4140b\" (UID: \"cc1891ac-f2b0-4964-82ae-81d7c3b4140b\") " Jan 07 10:12:57 crc kubenswrapper[5131]: I0107 10:12:57.782805 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-container-storage-root\") pod \"cc1891ac-f2b0-4964-82ae-81d7c3b4140b\" (UID: \"cc1891ac-f2b0-4964-82ae-81d7c3b4140b\") " Jan 07 10:12:57 crc kubenswrapper[5131]: I0107 10:12:57.783326 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "cc1891ac-f2b0-4964-82ae-81d7c3b4140b" (UID: "cc1891ac-f2b0-4964-82ae-81d7c3b4140b"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:12:57 crc kubenswrapper[5131]: I0107 10:12:57.788076 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-buildcachedir\") pod \"cc1891ac-f2b0-4964-82ae-81d7c3b4140b\" (UID: \"cc1891ac-f2b0-4964-82ae-81d7c3b4140b\") " Jan 07 10:12:57 crc kubenswrapper[5131]: I0107 10:12:57.788174 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-node-pullsecrets\") pod \"cc1891ac-f2b0-4964-82ae-81d7c3b4140b\" (UID: \"cc1891ac-f2b0-4964-82ae-81d7c3b4140b\") " Jan 07 10:12:57 crc kubenswrapper[5131]: I0107 10:12:57.783502 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "cc1891ac-f2b0-4964-82ae-81d7c3b4140b" (UID: "cc1891ac-f2b0-4964-82ae-81d7c3b4140b"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 10:12:57 crc kubenswrapper[5131]: I0107 10:12:57.784248 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "cc1891ac-f2b0-4964-82ae-81d7c3b4140b" (UID: "cc1891ac-f2b0-4964-82ae-81d7c3b4140b"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 10:12:57 crc kubenswrapper[5131]: I0107 10:12:57.788228 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ss4vf\" (UniqueName: \"kubernetes.io/projected/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-kube-api-access-ss4vf\") pod \"cc1891ac-f2b0-4964-82ae-81d7c3b4140b\" (UID: \"cc1891ac-f2b0-4964-82ae-81d7c3b4140b\") " Jan 07 10:12:57 crc kubenswrapper[5131]: I0107 10:12:57.788365 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-builder-dockercfg-vc6bg-pull\") pod \"cc1891ac-f2b0-4964-82ae-81d7c3b4140b\" (UID: \"cc1891ac-f2b0-4964-82ae-81d7c3b4140b\") " Jan 07 10:12:57 crc kubenswrapper[5131]: I0107 10:12:57.788406 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-container-storage-run\") pod \"cc1891ac-f2b0-4964-82ae-81d7c3b4140b\" (UID: \"cc1891ac-f2b0-4964-82ae-81d7c3b4140b\") " Jan 07 10:12:57 crc kubenswrapper[5131]: I0107 10:12:57.788466 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-build-system-configs\") pod \"cc1891ac-f2b0-4964-82ae-81d7c3b4140b\" (UID: \"cc1891ac-f2b0-4964-82ae-81d7c3b4140b\") " Jan 07 10:12:57 crc kubenswrapper[5131]: I0107 10:12:57.788551 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-build-blob-cache\") pod \"cc1891ac-f2b0-4964-82ae-81d7c3b4140b\" (UID: \"cc1891ac-f2b0-4964-82ae-81d7c3b4140b\") " Jan 07 10:12:57 crc kubenswrapper[5131]: I0107 10:12:57.789224 5131 reconciler_common.go:299] "Volume 
detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 07 10:12:57 crc kubenswrapper[5131]: I0107 10:12:57.789262 5131 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 07 10:12:57 crc kubenswrapper[5131]: I0107 10:12:57.789283 5131 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 07 10:12:57 crc kubenswrapper[5131]: I0107 10:12:57.790142 5131 scope.go:117] "RemoveContainer" containerID="ca1380b72f42ac574699be952f4369a568a90ad646cb7d57d325e00fb9404c7a" Jan 07 10:12:57 crc kubenswrapper[5131]: I0107 10:12:57.788117 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "cc1891ac-f2b0-4964-82ae-81d7c3b4140b" (UID: "cc1891ac-f2b0-4964-82ae-81d7c3b4140b"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 07 10:12:57 crc kubenswrapper[5131]: I0107 10:12:57.788260 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "cc1891ac-f2b0-4964-82ae-81d7c3b4140b" (UID: "cc1891ac-f2b0-4964-82ae-81d7c3b4140b"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 07 10:12:57 crc kubenswrapper[5131]: I0107 10:12:57.792096 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "cc1891ac-f2b0-4964-82ae-81d7c3b4140b" (UID: "cc1891ac-f2b0-4964-82ae-81d7c3b4140b"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:12:57 crc kubenswrapper[5131]: I0107 10:12:57.793251 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-builder-dockercfg-vc6bg-push" (OuterVolumeSpecName: "builder-dockercfg-vc6bg-push") pod "cc1891ac-f2b0-4964-82ae-81d7c3b4140b" (UID: "cc1891ac-f2b0-4964-82ae-81d7c3b4140b"). InnerVolumeSpecName "builder-dockercfg-vc6bg-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 10:12:57 crc kubenswrapper[5131]: I0107 10:12:57.794024 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "cc1891ac-f2b0-4964-82ae-81d7c3b4140b" (UID: "cc1891ac-f2b0-4964-82ae-81d7c3b4140b"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 10:12:57 crc kubenswrapper[5131]: I0107 10:12:57.795110 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-kube-api-access-ss4vf" (OuterVolumeSpecName: "kube-api-access-ss4vf") pod "cc1891ac-f2b0-4964-82ae-81d7c3b4140b" (UID: "cc1891ac-f2b0-4964-82ae-81d7c3b4140b"). InnerVolumeSpecName "kube-api-access-ss4vf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 10:12:57 crc kubenswrapper[5131]: I0107 10:12:57.795169 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-builder-dockercfg-vc6bg-pull" (OuterVolumeSpecName: "builder-dockercfg-vc6bg-pull") pod "cc1891ac-f2b0-4964-82ae-81d7c3b4140b" (UID: "cc1891ac-f2b0-4964-82ae-81d7c3b4140b"). InnerVolumeSpecName "builder-dockercfg-vc6bg-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 10:12:57 crc kubenswrapper[5131]: I0107 10:12:57.842959 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "cc1891ac-f2b0-4964-82ae-81d7c3b4140b" (UID: "cc1891ac-f2b0-4964-82ae-81d7c3b4140b"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:12:57 crc kubenswrapper[5131]: I0107 10:12:57.890227 5131 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-builder-dockercfg-vc6bg-push\") on node \"crc\" DevicePath \"\"" Jan 07 10:12:57 crc kubenswrapper[5131]: I0107 10:12:57.890258 5131 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 07 10:12:57 crc kubenswrapper[5131]: I0107 10:12:57.890266 5131 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 07 10:12:57 crc kubenswrapper[5131]: I0107 10:12:57.890276 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ss4vf\" (UniqueName: 
\"kubernetes.io/projected/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-kube-api-access-ss4vf\") on node \"crc\" DevicePath \"\"" Jan 07 10:12:57 crc kubenswrapper[5131]: I0107 10:12:57.890286 5131 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-builder-dockercfg-vc6bg-pull\") on node \"crc\" DevicePath \"\"" Jan 07 10:12:57 crc kubenswrapper[5131]: I0107 10:12:57.890294 5131 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 07 10:12:57 crc kubenswrapper[5131]: I0107 10:12:57.890302 5131 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 07 10:12:57 crc kubenswrapper[5131]: I0107 10:12:57.890312 5131 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 07 10:12:57 crc kubenswrapper[5131]: I0107 10:12:57.925494 5131 scope.go:117] "RemoveContainer" containerID="a03ace675ffa064328d0c2373b3e4adabea64953b52c016c93496c7af2830568" Jan 07 10:12:57 crc kubenswrapper[5131]: E0107 10:12:57.925805 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a03ace675ffa064328d0c2373b3e4adabea64953b52c016c93496c7af2830568\": container with ID starting with a03ace675ffa064328d0c2373b3e4adabea64953b52c016c93496c7af2830568 not found: ID does not exist" containerID="a03ace675ffa064328d0c2373b3e4adabea64953b52c016c93496c7af2830568" Jan 07 10:12:57 crc kubenswrapper[5131]: I0107 10:12:57.925868 5131 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a03ace675ffa064328d0c2373b3e4adabea64953b52c016c93496c7af2830568"} err="failed to get container status \"a03ace675ffa064328d0c2373b3e4adabea64953b52c016c93496c7af2830568\": rpc error: code = NotFound desc = could not find container \"a03ace675ffa064328d0c2373b3e4adabea64953b52c016c93496c7af2830568\": container with ID starting with a03ace675ffa064328d0c2373b3e4adabea64953b52c016c93496c7af2830568 not found: ID does not exist" Jan 07 10:12:57 crc kubenswrapper[5131]: I0107 10:12:57.925896 5131 scope.go:117] "RemoveContainer" containerID="ca1380b72f42ac574699be952f4369a568a90ad646cb7d57d325e00fb9404c7a" Jan 07 10:12:57 crc kubenswrapper[5131]: E0107 10:12:57.926108 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca1380b72f42ac574699be952f4369a568a90ad646cb7d57d325e00fb9404c7a\": container with ID starting with ca1380b72f42ac574699be952f4369a568a90ad646cb7d57d325e00fb9404c7a not found: ID does not exist" containerID="ca1380b72f42ac574699be952f4369a568a90ad646cb7d57d325e00fb9404c7a" Jan 07 10:12:57 crc kubenswrapper[5131]: I0107 10:12:57.926129 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca1380b72f42ac574699be952f4369a568a90ad646cb7d57d325e00fb9404c7a"} err="failed to get container status \"ca1380b72f42ac574699be952f4369a568a90ad646cb7d57d325e00fb9404c7a\": rpc error: code = NotFound desc = could not find container \"ca1380b72f42ac574699be952f4369a568a90ad646cb7d57d325e00fb9404c7a\": container with ID starting with ca1380b72f42ac574699be952f4369a568a90ad646cb7d57d325e00fb9404c7a not found: ID does not exist" Jan 07 10:12:58 crc kubenswrapper[5131]: I0107 10:12:58.153009 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-container-storage-root" (OuterVolumeSpecName: 
"container-storage-root") pod "cc1891ac-f2b0-4964-82ae-81d7c3b4140b" (UID: "cc1891ac-f2b0-4964-82ae-81d7c3b4140b"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:12:58 crc kubenswrapper[5131]: I0107 10:12:58.194976 5131 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cc1891ac-f2b0-4964-82ae-81d7c3b4140b-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 07 10:12:58 crc kubenswrapper[5131]: I0107 10:12:58.381620 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 07 10:12:58 crc kubenswrapper[5131]: I0107 10:12:58.388524 5131 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 07 10:12:58 crc kubenswrapper[5131]: I0107 10:12:58.767937 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Jan 07 10:12:58 crc kubenswrapper[5131]: I0107 10:12:58.768986 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cc1891ac-f2b0-4964-82ae-81d7c3b4140b" containerName="docker-build" Jan 07 10:12:58 crc kubenswrapper[5131]: I0107 10:12:58.769112 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc1891ac-f2b0-4964-82ae-81d7c3b4140b" containerName="docker-build" Jan 07 10:12:58 crc kubenswrapper[5131]: I0107 10:12:58.769215 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cc1891ac-f2b0-4964-82ae-81d7c3b4140b" containerName="manage-dockerfile" Jan 07 10:12:58 crc kubenswrapper[5131]: I0107 10:12:58.769296 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc1891ac-f2b0-4964-82ae-81d7c3b4140b" containerName="manage-dockerfile" Jan 07 10:12:58 crc kubenswrapper[5131]: I0107 10:12:58.769491 5131 memory_manager.go:356] "RemoveStaleState removing state" 
podUID="cc1891ac-f2b0-4964-82ae-81d7c3b4140b" containerName="docker-build" Jan 07 10:12:58 crc kubenswrapper[5131]: I0107 10:12:58.802117 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Jan 07 10:12:58 crc kubenswrapper[5131]: I0107 10:12:58.802318 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 07 10:12:58 crc kubenswrapper[5131]: I0107 10:12:58.805291 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-2-sys-config\"" Jan 07 10:12:58 crc kubenswrapper[5131]: I0107 10:12:58.805721 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-2-ca\"" Jan 07 10:12:58 crc kubenswrapper[5131]: I0107 10:12:58.806108 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-2-global-ca\"" Jan 07 10:12:58 crc kubenswrapper[5131]: I0107 10:12:58.807278 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-vc6bg\"" Jan 07 10:12:58 crc kubenswrapper[5131]: I0107 10:12:58.904412 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 07 10:12:58 crc kubenswrapper[5131]: I0107 10:12:58.904463 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: 
\"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 07 10:12:58 crc kubenswrapper[5131]: I0107 10:12:58.904495 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 07 10:12:58 crc kubenswrapper[5131]: I0107 10:12:58.904584 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-builder-dockercfg-vc6bg-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 07 10:12:58 crc kubenswrapper[5131]: I0107 10:12:58.904639 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 07 10:12:58 crc kubenswrapper[5131]: I0107 10:12:58.904714 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 07 10:12:58 crc kubenswrapper[5131]: I0107 10:12:58.904808 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-builder-dockercfg-vc6bg-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 07 10:12:58 crc kubenswrapper[5131]: I0107 10:12:58.904858 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 07 10:12:58 crc kubenswrapper[5131]: I0107 10:12:58.904930 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 07 10:12:58 crc kubenswrapper[5131]: I0107 10:12:58.904991 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 07 10:12:58 crc kubenswrapper[5131]: I0107 10:12:58.905052 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b\") " 
pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 07 10:12:58 crc kubenswrapper[5131]: I0107 10:12:58.905125 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6rqg\" (UniqueName: \"kubernetes.io/projected/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-kube-api-access-x6rqg\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 07 10:12:59 crc kubenswrapper[5131]: I0107 10:12:59.006163 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 07 10:12:59 crc kubenswrapper[5131]: I0107 10:12:59.006260 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 07 10:12:59 crc kubenswrapper[5131]: I0107 10:12:59.006362 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x6rqg\" (UniqueName: \"kubernetes.io/projected/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-kube-api-access-x6rqg\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 07 10:12:59 crc kubenswrapper[5131]: I0107 10:12:59.006423 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-buildworkdir\") 
pod \"prometheus-webhook-snmp-2-build\" (UID: \"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 07 10:12:59 crc kubenswrapper[5131]: I0107 10:12:59.006457 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 07 10:12:59 crc kubenswrapper[5131]: I0107 10:12:59.006538 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 07 10:12:59 crc kubenswrapper[5131]: I0107 10:12:59.006590 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 07 10:12:59 crc kubenswrapper[5131]: I0107 10:12:59.006642 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-builder-dockercfg-vc6bg-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 07 10:12:59 crc kubenswrapper[5131]: I0107 10:12:59.006707 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: 
\"kubernetes.io/empty-dir/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 07 10:12:59 crc kubenswrapper[5131]: I0107 10:12:59.006779 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 07 10:12:59 crc kubenswrapper[5131]: I0107 10:12:59.006852 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 07 10:12:59 crc kubenswrapper[5131]: I0107 10:12:59.006952 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-builder-dockercfg-vc6bg-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 07 10:12:59 crc kubenswrapper[5131]: I0107 10:12:59.007006 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 07 10:12:59 crc kubenswrapper[5131]: I0107 10:12:59.007082 5131 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 07 10:12:59 crc kubenswrapper[5131]: I0107 10:12:59.007592 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 07 10:12:59 crc kubenswrapper[5131]: I0107 10:12:59.007650 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 07 10:12:59 crc kubenswrapper[5131]: I0107 10:12:59.008285 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 07 10:12:59 crc kubenswrapper[5131]: I0107 10:12:59.008404 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 07 10:12:59 crc kubenswrapper[5131]: I0107 
10:12:59.008446 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 07 10:12:59 crc kubenswrapper[5131]: I0107 10:12:59.008688 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 07 10:12:59 crc kubenswrapper[5131]: I0107 10:12:59.008752 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 07 10:12:59 crc kubenswrapper[5131]: I0107 10:12:59.014414 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-builder-dockercfg-vc6bg-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 07 10:12:59 crc kubenswrapper[5131]: I0107 10:12:59.017203 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-builder-dockercfg-vc6bg-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 
07 10:12:59 crc kubenswrapper[5131]: I0107 10:12:59.043876 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6rqg\" (UniqueName: \"kubernetes.io/projected/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-kube-api-access-x6rqg\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 07 10:12:59 crc kubenswrapper[5131]: I0107 10:12:59.155190 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 07 10:12:59 crc kubenswrapper[5131]: I0107 10:12:59.448168 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Jan 07 10:12:59 crc kubenswrapper[5131]: I0107 10:12:59.770512 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b","Type":"ContainerStarted","Data":"8be772e2f2a9b61448f92f2a2d1fc159ad3ced958d7c14222a2ef058d99a8f9c"} Jan 07 10:13:00 crc kubenswrapper[5131]: I0107 10:13:00.193087 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc1891ac-f2b0-4964-82ae-81d7c3b4140b" path="/var/lib/kubelet/pods/cc1891ac-f2b0-4964-82ae-81d7c3b4140b/volumes" Jan 07 10:13:00 crc kubenswrapper[5131]: I0107 10:13:00.782212 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b","Type":"ContainerStarted","Data":"09823d7a989b83e5b8eada43f3af1af73ede62ffa3596610e8798874bde439ef"} Jan 07 10:13:01 crc kubenswrapper[5131]: I0107 10:13:01.794554 5131 generic.go:358] "Generic (PLEG): container finished" podID="d5a5d36e-d1fb-4c47-ae7b-4970cca1712b" containerID="09823d7a989b83e5b8eada43f3af1af73ede62ffa3596610e8798874bde439ef" exitCode=0 Jan 07 10:13:01 crc kubenswrapper[5131]: I0107 10:13:01.794711 
5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b","Type":"ContainerDied","Data":"09823d7a989b83e5b8eada43f3af1af73ede62ffa3596610e8798874bde439ef"} Jan 07 10:13:02 crc kubenswrapper[5131]: I0107 10:13:02.816560 5131 generic.go:358] "Generic (PLEG): container finished" podID="d5a5d36e-d1fb-4c47-ae7b-4970cca1712b" containerID="886d3aa4a1f385db7594233c8809326ec0c16ec653f0ccc4262ae417cf2324bd" exitCode=0 Jan 07 10:13:02 crc kubenswrapper[5131]: I0107 10:13:02.816617 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b","Type":"ContainerDied","Data":"886d3aa4a1f385db7594233c8809326ec0c16ec653f0ccc4262ae417cf2324bd"} Jan 07 10:13:02 crc kubenswrapper[5131]: I0107 10:13:02.856245 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-2-build_d5a5d36e-d1fb-4c47-ae7b-4970cca1712b/manage-dockerfile/0.log" Jan 07 10:13:03 crc kubenswrapper[5131]: I0107 10:13:03.826574 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b","Type":"ContainerStarted","Data":"1eb00149469cd90898ee0868fb3ab990c0d3cdc5bcc26895b81a11988e242e3e"} Jan 07 10:13:03 crc kubenswrapper[5131]: I0107 10:13:03.870091 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-webhook-snmp-2-build" podStartSLOduration=5.870012714 podStartE2EDuration="5.870012714s" podCreationTimestamp="2026-01-07 10:12:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 10:13:03.862894108 +0000 UTC m=+1412.029195682" watchObservedRunningTime="2026-01-07 10:13:03.870012714 +0000 UTC m=+1412.036314328" Jan 07 
10:13:20 crc kubenswrapper[5131]: I0107 10:13:20.663657 5131 patch_prober.go:28] interesting pod/machine-config-daemon-dvdrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 07 10:13:20 crc kubenswrapper[5131]: I0107 10:13:20.664564 5131 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 07 10:13:20 crc kubenswrapper[5131]: I0107 10:13:20.664662 5131 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" Jan 07 10:13:20 crc kubenswrapper[5131]: I0107 10:13:20.665760 5131 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"13b258610a3045e67e9e5de274b918c4da88f0376e2747328b51f3ef9deaf0e0"} pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 07 10:13:20 crc kubenswrapper[5131]: I0107 10:13:20.665913 5131 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73" containerName="machine-config-daemon" containerID="cri-o://13b258610a3045e67e9e5de274b918c4da88f0376e2747328b51f3ef9deaf0e0" gracePeriod=600 Jan 07 10:13:20 crc kubenswrapper[5131]: I0107 10:13:20.984271 5131 generic.go:358] "Generic (PLEG): container finished" podID="3942e752-44ba-4678-8723-6cd778e60d73" 
containerID="13b258610a3045e67e9e5de274b918c4da88f0376e2747328b51f3ef9deaf0e0" exitCode=0 Jan 07 10:13:20 crc kubenswrapper[5131]: I0107 10:13:20.984387 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" event={"ID":"3942e752-44ba-4678-8723-6cd778e60d73","Type":"ContainerDied","Data":"13b258610a3045e67e9e5de274b918c4da88f0376e2747328b51f3ef9deaf0e0"} Jan 07 10:13:20 crc kubenswrapper[5131]: I0107 10:13:20.984819 5131 scope.go:117] "RemoveContainer" containerID="6a3199d24d9f75069e3d6ef18dc98d686384b2b6b4a377d2ed0dde963838ac1e" Jan 07 10:13:21 crc kubenswrapper[5131]: I0107 10:13:21.996994 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" event={"ID":"3942e752-44ba-4678-8723-6cd778e60d73","Type":"ContainerStarted","Data":"9663cd7495facf8f3b5c9cd42ca06c0e50d8cba730f2743bbdac9e0b5db67e25"} Jan 07 10:14:00 crc kubenswrapper[5131]: I0107 10:14:00.143661 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29463014-svd7q"] Jan 07 10:14:00 crc kubenswrapper[5131]: I0107 10:14:00.189980 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29463014-svd7q"] Jan 07 10:14:00 crc kubenswrapper[5131]: I0107 10:14:00.192063 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29463014-svd7q" Jan 07 10:14:00 crc kubenswrapper[5131]: I0107 10:14:00.194696 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-l8fwl\"" Jan 07 10:14:00 crc kubenswrapper[5131]: I0107 10:14:00.196537 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 07 10:14:00 crc kubenswrapper[5131]: I0107 10:14:00.196919 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 07 10:14:00 crc kubenswrapper[5131]: I0107 10:14:00.315402 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrltw\" (UniqueName: \"kubernetes.io/projected/e7d21fe3-ed1a-4c84-932c-be16c225cf34-kube-api-access-mrltw\") pod \"auto-csr-approver-29463014-svd7q\" (UID: \"e7d21fe3-ed1a-4c84-932c-be16c225cf34\") " pod="openshift-infra/auto-csr-approver-29463014-svd7q" Jan 07 10:14:00 crc kubenswrapper[5131]: I0107 10:14:00.417619 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mrltw\" (UniqueName: \"kubernetes.io/projected/e7d21fe3-ed1a-4c84-932c-be16c225cf34-kube-api-access-mrltw\") pod \"auto-csr-approver-29463014-svd7q\" (UID: \"e7d21fe3-ed1a-4c84-932c-be16c225cf34\") " pod="openshift-infra/auto-csr-approver-29463014-svd7q" Jan 07 10:14:00 crc kubenswrapper[5131]: I0107 10:14:00.442333 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrltw\" (UniqueName: \"kubernetes.io/projected/e7d21fe3-ed1a-4c84-932c-be16c225cf34-kube-api-access-mrltw\") pod \"auto-csr-approver-29463014-svd7q\" (UID: \"e7d21fe3-ed1a-4c84-932c-be16c225cf34\") " pod="openshift-infra/auto-csr-approver-29463014-svd7q" Jan 07 10:14:00 crc kubenswrapper[5131]: I0107 10:14:00.517240 5131 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29463014-svd7q" Jan 07 10:14:00 crc kubenswrapper[5131]: I0107 10:14:00.931544 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29463014-svd7q"] Jan 07 10:14:00 crc kubenswrapper[5131]: W0107 10:14:00.942237 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7d21fe3_ed1a_4c84_932c_be16c225cf34.slice/crio-2d7f716b3cd43a73479d046d162e989bb984cb9a9cbbb4aec42bfe7ba46d9e5c WatchSource:0}: Error finding container 2d7f716b3cd43a73479d046d162e989bb984cb9a9cbbb4aec42bfe7ba46d9e5c: Status 404 returned error can't find the container with id 2d7f716b3cd43a73479d046d162e989bb984cb9a9cbbb4aec42bfe7ba46d9e5c Jan 07 10:14:01 crc kubenswrapper[5131]: I0107 10:14:01.372608 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29463014-svd7q" event={"ID":"e7d21fe3-ed1a-4c84-932c-be16c225cf34","Type":"ContainerStarted","Data":"2d7f716b3cd43a73479d046d162e989bb984cb9a9cbbb4aec42bfe7ba46d9e5c"} Jan 07 10:14:03 crc kubenswrapper[5131]: I0107 10:14:03.393981 5131 generic.go:358] "Generic (PLEG): container finished" podID="e7d21fe3-ed1a-4c84-932c-be16c225cf34" containerID="4547a702ed8b5fcf176a6a074d80a7fd1cf1266a1e1bb50b5c8fa8a5d1a80210" exitCode=0 Jan 07 10:14:03 crc kubenswrapper[5131]: I0107 10:14:03.394074 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29463014-svd7q" event={"ID":"e7d21fe3-ed1a-4c84-932c-be16c225cf34","Type":"ContainerDied","Data":"4547a702ed8b5fcf176a6a074d80a7fd1cf1266a1e1bb50b5c8fa8a5d1a80210"} Jan 07 10:14:04 crc kubenswrapper[5131]: I0107 10:14:04.636379 5131 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29463014-svd7q" Jan 07 10:14:04 crc kubenswrapper[5131]: I0107 10:14:04.795030 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrltw\" (UniqueName: \"kubernetes.io/projected/e7d21fe3-ed1a-4c84-932c-be16c225cf34-kube-api-access-mrltw\") pod \"e7d21fe3-ed1a-4c84-932c-be16c225cf34\" (UID: \"e7d21fe3-ed1a-4c84-932c-be16c225cf34\") " Jan 07 10:14:04 crc kubenswrapper[5131]: I0107 10:14:04.807043 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7d21fe3-ed1a-4c84-932c-be16c225cf34-kube-api-access-mrltw" (OuterVolumeSpecName: "kube-api-access-mrltw") pod "e7d21fe3-ed1a-4c84-932c-be16c225cf34" (UID: "e7d21fe3-ed1a-4c84-932c-be16c225cf34"). InnerVolumeSpecName "kube-api-access-mrltw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 10:14:04 crc kubenswrapper[5131]: I0107 10:14:04.896915 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mrltw\" (UniqueName: \"kubernetes.io/projected/e7d21fe3-ed1a-4c84-932c-be16c225cf34-kube-api-access-mrltw\") on node \"crc\" DevicePath \"\"" Jan 07 10:14:05 crc kubenswrapper[5131]: I0107 10:14:05.410083 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29463014-svd7q" event={"ID":"e7d21fe3-ed1a-4c84-932c-be16c225cf34","Type":"ContainerDied","Data":"2d7f716b3cd43a73479d046d162e989bb984cb9a9cbbb4aec42bfe7ba46d9e5c"} Jan 07 10:14:05 crc kubenswrapper[5131]: I0107 10:14:05.410548 5131 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d7f716b3cd43a73479d046d162e989bb984cb9a9cbbb4aec42bfe7ba46d9e5c" Jan 07 10:14:05 crc kubenswrapper[5131]: I0107 10:14:05.410111 5131 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29463014-svd7q" Jan 07 10:14:05 crc kubenswrapper[5131]: I0107 10:14:05.564469 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hzcn5"] Jan 07 10:14:05 crc kubenswrapper[5131]: I0107 10:14:05.565153 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e7d21fe3-ed1a-4c84-932c-be16c225cf34" containerName="oc" Jan 07 10:14:05 crc kubenswrapper[5131]: I0107 10:14:05.565166 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7d21fe3-ed1a-4c84-932c-be16c225cf34" containerName="oc" Jan 07 10:14:05 crc kubenswrapper[5131]: I0107 10:14:05.565277 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="e7d21fe3-ed1a-4c84-932c-be16c225cf34" containerName="oc" Jan 07 10:14:05 crc kubenswrapper[5131]: I0107 10:14:05.569575 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hzcn5" Jan 07 10:14:05 crc kubenswrapper[5131]: I0107 10:14:05.570404 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hzcn5"] Jan 07 10:14:05 crc kubenswrapper[5131]: I0107 10:14:05.698947 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29463008-qzfcq"] Jan 07 10:14:05 crc kubenswrapper[5131]: I0107 10:14:05.704515 5131 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29463008-qzfcq"] Jan 07 10:14:05 crc kubenswrapper[5131]: I0107 10:14:05.708576 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aaef4e55-4b3b-4c8d-81cc-935dd5e45811-catalog-content\") pod \"community-operators-hzcn5\" (UID: \"aaef4e55-4b3b-4c8d-81cc-935dd5e45811\") " pod="openshift-marketplace/community-operators-hzcn5" Jan 07 10:14:05 crc kubenswrapper[5131]: 
I0107 10:14:05.708649 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmhjn\" (UniqueName: \"kubernetes.io/projected/aaef4e55-4b3b-4c8d-81cc-935dd5e45811-kube-api-access-dmhjn\") pod \"community-operators-hzcn5\" (UID: \"aaef4e55-4b3b-4c8d-81cc-935dd5e45811\") " pod="openshift-marketplace/community-operators-hzcn5" Jan 07 10:14:05 crc kubenswrapper[5131]: I0107 10:14:05.708710 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aaef4e55-4b3b-4c8d-81cc-935dd5e45811-utilities\") pod \"community-operators-hzcn5\" (UID: \"aaef4e55-4b3b-4c8d-81cc-935dd5e45811\") " pod="openshift-marketplace/community-operators-hzcn5" Jan 07 10:14:05 crc kubenswrapper[5131]: I0107 10:14:05.822730 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aaef4e55-4b3b-4c8d-81cc-935dd5e45811-catalog-content\") pod \"community-operators-hzcn5\" (UID: \"aaef4e55-4b3b-4c8d-81cc-935dd5e45811\") " pod="openshift-marketplace/community-operators-hzcn5" Jan 07 10:14:05 crc kubenswrapper[5131]: I0107 10:14:05.822821 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dmhjn\" (UniqueName: \"kubernetes.io/projected/aaef4e55-4b3b-4c8d-81cc-935dd5e45811-kube-api-access-dmhjn\") pod \"community-operators-hzcn5\" (UID: \"aaef4e55-4b3b-4c8d-81cc-935dd5e45811\") " pod="openshift-marketplace/community-operators-hzcn5" Jan 07 10:14:05 crc kubenswrapper[5131]: I0107 10:14:05.822910 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aaef4e55-4b3b-4c8d-81cc-935dd5e45811-utilities\") pod \"community-operators-hzcn5\" (UID: \"aaef4e55-4b3b-4c8d-81cc-935dd5e45811\") " pod="openshift-marketplace/community-operators-hzcn5" Jan 07 10:14:05 
crc kubenswrapper[5131]: I0107 10:14:05.823538 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aaef4e55-4b3b-4c8d-81cc-935dd5e45811-utilities\") pod \"community-operators-hzcn5\" (UID: \"aaef4e55-4b3b-4c8d-81cc-935dd5e45811\") " pod="openshift-marketplace/community-operators-hzcn5" Jan 07 10:14:05 crc kubenswrapper[5131]: I0107 10:14:05.825218 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aaef4e55-4b3b-4c8d-81cc-935dd5e45811-catalog-content\") pod \"community-operators-hzcn5\" (UID: \"aaef4e55-4b3b-4c8d-81cc-935dd5e45811\") " pod="openshift-marketplace/community-operators-hzcn5" Jan 07 10:14:05 crc kubenswrapper[5131]: I0107 10:14:05.839159 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmhjn\" (UniqueName: \"kubernetes.io/projected/aaef4e55-4b3b-4c8d-81cc-935dd5e45811-kube-api-access-dmhjn\") pod \"community-operators-hzcn5\" (UID: \"aaef4e55-4b3b-4c8d-81cc-935dd5e45811\") " pod="openshift-marketplace/community-operators-hzcn5" Jan 07 10:14:05 crc kubenswrapper[5131]: I0107 10:14:05.888222 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hzcn5" Jan 07 10:14:06 crc kubenswrapper[5131]: I0107 10:14:06.189490 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0cfe355-971c-4d53-99ab-77e026860934" path="/var/lib/kubelet/pods/c0cfe355-971c-4d53-99ab-77e026860934/volumes" Jan 07 10:14:06 crc kubenswrapper[5131]: I0107 10:14:06.400516 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hzcn5"] Jan 07 10:14:07 crc kubenswrapper[5131]: I0107 10:14:07.429984 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hzcn5" event={"ID":"aaef4e55-4b3b-4c8d-81cc-935dd5e45811","Type":"ContainerStarted","Data":"35d3bc09c54b734a1ff6c4b4e4a2de2c1807a701c275caf28d58fcd395b49e82"} Jan 07 10:14:08 crc kubenswrapper[5131]: I0107 10:14:08.440617 5131 generic.go:358] "Generic (PLEG): container finished" podID="aaef4e55-4b3b-4c8d-81cc-935dd5e45811" containerID="44d6695a4496132f5208233a1a0732ed87738b3c75e359c47c4dd43d92a34112" exitCode=0 Jan 07 10:14:08 crc kubenswrapper[5131]: I0107 10:14:08.440679 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hzcn5" event={"ID":"aaef4e55-4b3b-4c8d-81cc-935dd5e45811","Type":"ContainerDied","Data":"44d6695a4496132f5208233a1a0732ed87738b3c75e359c47c4dd43d92a34112"} Jan 07 10:14:11 crc kubenswrapper[5131]: I0107 10:14:11.467934 5131 generic.go:358] "Generic (PLEG): container finished" podID="aaef4e55-4b3b-4c8d-81cc-935dd5e45811" containerID="95979c2da1d09a591fff88c734688d382bcb78d236c9550eb18f93f3d4cf32de" exitCode=0 Jan 07 10:14:11 crc kubenswrapper[5131]: I0107 10:14:11.468125 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hzcn5" event={"ID":"aaef4e55-4b3b-4c8d-81cc-935dd5e45811","Type":"ContainerDied","Data":"95979c2da1d09a591fff88c734688d382bcb78d236c9550eb18f93f3d4cf32de"} Jan 07 10:14:12 crc 
kubenswrapper[5131]: I0107 10:14:12.478900 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hzcn5" event={"ID":"aaef4e55-4b3b-4c8d-81cc-935dd5e45811","Type":"ContainerStarted","Data":"05fcd352eebba6ab470c197b684d0a05bf38135973725637ba2722f9950ef05b"} Jan 07 10:14:12 crc kubenswrapper[5131]: I0107 10:14:12.514499 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hzcn5" podStartSLOduration=5.340872368 podStartE2EDuration="7.514469768s" podCreationTimestamp="2026-01-07 10:14:05 +0000 UTC" firstStartedPulling="2026-01-07 10:14:08.441958301 +0000 UTC m=+1476.608259905" lastFinishedPulling="2026-01-07 10:14:10.615555731 +0000 UTC m=+1478.781857305" observedRunningTime="2026-01-07 10:14:12.499350684 +0000 UTC m=+1480.665652268" watchObservedRunningTime="2026-01-07 10:14:12.514469768 +0000 UTC m=+1480.680771362" Jan 07 10:14:15 crc kubenswrapper[5131]: I0107 10:14:15.889181 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hzcn5" Jan 07 10:14:15 crc kubenswrapper[5131]: I0107 10:14:15.889575 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-hzcn5" Jan 07 10:14:15 crc kubenswrapper[5131]: I0107 10:14:15.946606 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hzcn5" Jan 07 10:14:26 crc kubenswrapper[5131]: I0107 10:14:26.578063 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hzcn5" Jan 07 10:14:26 crc kubenswrapper[5131]: I0107 10:14:26.641438 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hzcn5"] Jan 07 10:14:26 crc kubenswrapper[5131]: I0107 10:14:26.641764 5131 kuberuntime_container.go:858] "Killing container 
with a grace period" pod="openshift-marketplace/community-operators-hzcn5" podUID="aaef4e55-4b3b-4c8d-81cc-935dd5e45811" containerName="registry-server" containerID="cri-o://05fcd352eebba6ab470c197b684d0a05bf38135973725637ba2722f9950ef05b" gracePeriod=2 Jan 07 10:14:27 crc kubenswrapper[5131]: I0107 10:14:27.032269 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hzcn5" Jan 07 10:14:27 crc kubenswrapper[5131]: I0107 10:14:27.070760 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aaef4e55-4b3b-4c8d-81cc-935dd5e45811-catalog-content\") pod \"aaef4e55-4b3b-4c8d-81cc-935dd5e45811\" (UID: \"aaef4e55-4b3b-4c8d-81cc-935dd5e45811\") " Jan 07 10:14:27 crc kubenswrapper[5131]: I0107 10:14:27.070885 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aaef4e55-4b3b-4c8d-81cc-935dd5e45811-utilities\") pod \"aaef4e55-4b3b-4c8d-81cc-935dd5e45811\" (UID: \"aaef4e55-4b3b-4c8d-81cc-935dd5e45811\") " Jan 07 10:14:27 crc kubenswrapper[5131]: I0107 10:14:27.070927 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmhjn\" (UniqueName: \"kubernetes.io/projected/aaef4e55-4b3b-4c8d-81cc-935dd5e45811-kube-api-access-dmhjn\") pod \"aaef4e55-4b3b-4c8d-81cc-935dd5e45811\" (UID: \"aaef4e55-4b3b-4c8d-81cc-935dd5e45811\") " Jan 07 10:14:27 crc kubenswrapper[5131]: I0107 10:14:27.072424 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aaef4e55-4b3b-4c8d-81cc-935dd5e45811-utilities" (OuterVolumeSpecName: "utilities") pod "aaef4e55-4b3b-4c8d-81cc-935dd5e45811" (UID: "aaef4e55-4b3b-4c8d-81cc-935dd5e45811"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:14:27 crc kubenswrapper[5131]: I0107 10:14:27.078655 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aaef4e55-4b3b-4c8d-81cc-935dd5e45811-kube-api-access-dmhjn" (OuterVolumeSpecName: "kube-api-access-dmhjn") pod "aaef4e55-4b3b-4c8d-81cc-935dd5e45811" (UID: "aaef4e55-4b3b-4c8d-81cc-935dd5e45811"). InnerVolumeSpecName "kube-api-access-dmhjn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 10:14:27 crc kubenswrapper[5131]: I0107 10:14:27.123387 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aaef4e55-4b3b-4c8d-81cc-935dd5e45811-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aaef4e55-4b3b-4c8d-81cc-935dd5e45811" (UID: "aaef4e55-4b3b-4c8d-81cc-935dd5e45811"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:14:27 crc kubenswrapper[5131]: I0107 10:14:27.172759 5131 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aaef4e55-4b3b-4c8d-81cc-935dd5e45811-utilities\") on node \"crc\" DevicePath \"\"" Jan 07 10:14:27 crc kubenswrapper[5131]: I0107 10:14:27.172865 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dmhjn\" (UniqueName: \"kubernetes.io/projected/aaef4e55-4b3b-4c8d-81cc-935dd5e45811-kube-api-access-dmhjn\") on node \"crc\" DevicePath \"\"" Jan 07 10:14:27 crc kubenswrapper[5131]: I0107 10:14:27.172889 5131 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aaef4e55-4b3b-4c8d-81cc-935dd5e45811-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 07 10:14:27 crc kubenswrapper[5131]: I0107 10:14:27.612893 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hzcn5" 
event={"ID":"aaef4e55-4b3b-4c8d-81cc-935dd5e45811","Type":"ContainerDied","Data":"05fcd352eebba6ab470c197b684d0a05bf38135973725637ba2722f9950ef05b"} Jan 07 10:14:27 crc kubenswrapper[5131]: I0107 10:14:27.613386 5131 scope.go:117] "RemoveContainer" containerID="05fcd352eebba6ab470c197b684d0a05bf38135973725637ba2722f9950ef05b" Jan 07 10:14:27 crc kubenswrapper[5131]: I0107 10:14:27.612931 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hzcn5" Jan 07 10:14:27 crc kubenswrapper[5131]: I0107 10:14:27.612794 5131 generic.go:358] "Generic (PLEG): container finished" podID="aaef4e55-4b3b-4c8d-81cc-935dd5e45811" containerID="05fcd352eebba6ab470c197b684d0a05bf38135973725637ba2722f9950ef05b" exitCode=0 Jan 07 10:14:27 crc kubenswrapper[5131]: I0107 10:14:27.614620 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hzcn5" event={"ID":"aaef4e55-4b3b-4c8d-81cc-935dd5e45811","Type":"ContainerDied","Data":"35d3bc09c54b734a1ff6c4b4e4a2de2c1807a701c275caf28d58fcd395b49e82"} Jan 07 10:14:27 crc kubenswrapper[5131]: I0107 10:14:27.638240 5131 scope.go:117] "RemoveContainer" containerID="95979c2da1d09a591fff88c734688d382bcb78d236c9550eb18f93f3d4cf32de" Jan 07 10:14:27 crc kubenswrapper[5131]: I0107 10:14:27.669856 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hzcn5"] Jan 07 10:14:27 crc kubenswrapper[5131]: I0107 10:14:27.674980 5131 scope.go:117] "RemoveContainer" containerID="44d6695a4496132f5208233a1a0732ed87738b3c75e359c47c4dd43d92a34112" Jan 07 10:14:27 crc kubenswrapper[5131]: I0107 10:14:27.677072 5131 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hzcn5"] Jan 07 10:14:27 crc kubenswrapper[5131]: I0107 10:14:27.711713 5131 scope.go:117] "RemoveContainer" containerID="05fcd352eebba6ab470c197b684d0a05bf38135973725637ba2722f9950ef05b" Jan 07 
10:14:27 crc kubenswrapper[5131]: E0107 10:14:27.712216 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05fcd352eebba6ab470c197b684d0a05bf38135973725637ba2722f9950ef05b\": container with ID starting with 05fcd352eebba6ab470c197b684d0a05bf38135973725637ba2722f9950ef05b not found: ID does not exist" containerID="05fcd352eebba6ab470c197b684d0a05bf38135973725637ba2722f9950ef05b" Jan 07 10:14:27 crc kubenswrapper[5131]: I0107 10:14:27.712264 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05fcd352eebba6ab470c197b684d0a05bf38135973725637ba2722f9950ef05b"} err="failed to get container status \"05fcd352eebba6ab470c197b684d0a05bf38135973725637ba2722f9950ef05b\": rpc error: code = NotFound desc = could not find container \"05fcd352eebba6ab470c197b684d0a05bf38135973725637ba2722f9950ef05b\": container with ID starting with 05fcd352eebba6ab470c197b684d0a05bf38135973725637ba2722f9950ef05b not found: ID does not exist" Jan 07 10:14:27 crc kubenswrapper[5131]: I0107 10:14:27.712290 5131 scope.go:117] "RemoveContainer" containerID="95979c2da1d09a591fff88c734688d382bcb78d236c9550eb18f93f3d4cf32de" Jan 07 10:14:27 crc kubenswrapper[5131]: E0107 10:14:27.712702 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95979c2da1d09a591fff88c734688d382bcb78d236c9550eb18f93f3d4cf32de\": container with ID starting with 95979c2da1d09a591fff88c734688d382bcb78d236c9550eb18f93f3d4cf32de not found: ID does not exist" containerID="95979c2da1d09a591fff88c734688d382bcb78d236c9550eb18f93f3d4cf32de" Jan 07 10:14:27 crc kubenswrapper[5131]: I0107 10:14:27.712771 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95979c2da1d09a591fff88c734688d382bcb78d236c9550eb18f93f3d4cf32de"} err="failed to get container status 
\"95979c2da1d09a591fff88c734688d382bcb78d236c9550eb18f93f3d4cf32de\": rpc error: code = NotFound desc = could not find container \"95979c2da1d09a591fff88c734688d382bcb78d236c9550eb18f93f3d4cf32de\": container with ID starting with 95979c2da1d09a591fff88c734688d382bcb78d236c9550eb18f93f3d4cf32de not found: ID does not exist" Jan 07 10:14:27 crc kubenswrapper[5131]: I0107 10:14:27.712816 5131 scope.go:117] "RemoveContainer" containerID="44d6695a4496132f5208233a1a0732ed87738b3c75e359c47c4dd43d92a34112" Jan 07 10:14:27 crc kubenswrapper[5131]: E0107 10:14:27.713316 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44d6695a4496132f5208233a1a0732ed87738b3c75e359c47c4dd43d92a34112\": container with ID starting with 44d6695a4496132f5208233a1a0732ed87738b3c75e359c47c4dd43d92a34112 not found: ID does not exist" containerID="44d6695a4496132f5208233a1a0732ed87738b3c75e359c47c4dd43d92a34112" Jan 07 10:14:27 crc kubenswrapper[5131]: I0107 10:14:27.713354 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44d6695a4496132f5208233a1a0732ed87738b3c75e359c47c4dd43d92a34112"} err="failed to get container status \"44d6695a4496132f5208233a1a0732ed87738b3c75e359c47c4dd43d92a34112\": rpc error: code = NotFound desc = could not find container \"44d6695a4496132f5208233a1a0732ed87738b3c75e359c47c4dd43d92a34112\": container with ID starting with 44d6695a4496132f5208233a1a0732ed87738b3c75e359c47c4dd43d92a34112 not found: ID does not exist" Jan 07 10:14:28 crc kubenswrapper[5131]: I0107 10:14:28.193218 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aaef4e55-4b3b-4c8d-81cc-935dd5e45811" path="/var/lib/kubelet/pods/aaef4e55-4b3b-4c8d-81cc-935dd5e45811/volumes" Jan 07 10:14:32 crc kubenswrapper[5131]: I0107 10:14:32.840320 5131 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-wcqw9_a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1/kube-multus/0.log" Jan 07 10:14:32 crc kubenswrapper[5131]: I0107 10:14:32.840672 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wcqw9_a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1/kube-multus/0.log" Jan 07 10:14:32 crc kubenswrapper[5131]: I0107 10:14:32.850258 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 07 10:14:32 crc kubenswrapper[5131]: I0107 10:14:32.850400 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 07 10:14:34 crc kubenswrapper[5131]: I0107 10:14:34.344731 5131 scope.go:117] "RemoveContainer" containerID="1c9cce502c9f1f1da04380a3ddb3ee24d0aded9a623539e893df31f1217c2255" Jan 07 10:14:39 crc kubenswrapper[5131]: I0107 10:14:39.697883 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b","Type":"ContainerDied","Data":"1eb00149469cd90898ee0868fb3ab990c0d3cdc5bcc26895b81a11988e242e3e"} Jan 07 10:14:39 crc kubenswrapper[5131]: I0107 10:14:39.697923 5131 generic.go:358] "Generic (PLEG): container finished" podID="d5a5d36e-d1fb-4c47-ae7b-4970cca1712b" containerID="1eb00149469cd90898ee0868fb3ab990c0d3cdc5bcc26895b81a11988e242e3e" exitCode=0 Jan 07 10:14:40 crc kubenswrapper[5131]: I0107 10:14:40.966295 5131 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 07 10:14:41 crc kubenswrapper[5131]: I0107 10:14:41.073994 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-buildworkdir\") pod \"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b\" (UID: \"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b\") " Jan 07 10:14:41 crc kubenswrapper[5131]: I0107 10:14:41.074110 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-builder-dockercfg-vc6bg-pull\") pod \"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b\" (UID: \"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b\") " Jan 07 10:14:41 crc kubenswrapper[5131]: I0107 10:14:41.074155 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6rqg\" (UniqueName: \"kubernetes.io/projected/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-kube-api-access-x6rqg\") pod \"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b\" (UID: \"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b\") " Jan 07 10:14:41 crc kubenswrapper[5131]: I0107 10:14:41.074175 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-build-proxy-ca-bundles\") pod \"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b\" (UID: \"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b\") " Jan 07 10:14:41 crc kubenswrapper[5131]: I0107 10:14:41.074202 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-build-blob-cache\") pod \"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b\" (UID: \"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b\") " Jan 07 10:14:41 crc kubenswrapper[5131]: I0107 10:14:41.074220 5131 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-container-storage-root\") pod \"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b\" (UID: \"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b\") " Jan 07 10:14:41 crc kubenswrapper[5131]: I0107 10:14:41.074251 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-build-system-configs\") pod \"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b\" (UID: \"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b\") " Jan 07 10:14:41 crc kubenswrapper[5131]: I0107 10:14:41.074280 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-buildcachedir\") pod \"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b\" (UID: \"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b\") " Jan 07 10:14:41 crc kubenswrapper[5131]: I0107 10:14:41.074329 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-builder-dockercfg-vc6bg-push\") pod \"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b\" (UID: \"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b\") " Jan 07 10:14:41 crc kubenswrapper[5131]: I0107 10:14:41.074371 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-build-ca-bundles\") pod \"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b\" (UID: \"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b\") " Jan 07 10:14:41 crc kubenswrapper[5131]: I0107 10:14:41.074390 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-container-storage-run\") pod \"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b\" (UID: \"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b\") " Jan 07 10:14:41 crc kubenswrapper[5131]: I0107 10:14:41.074423 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-node-pullsecrets\") pod \"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b\" (UID: \"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b\") " Jan 07 10:14:41 crc kubenswrapper[5131]: I0107 10:14:41.074446 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "d5a5d36e-d1fb-4c47-ae7b-4970cca1712b" (UID: "d5a5d36e-d1fb-4c47-ae7b-4970cca1712b"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 07 10:14:41 crc kubenswrapper[5131]: I0107 10:14:41.074628 5131 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 07 10:14:41 crc kubenswrapper[5131]: I0107 10:14:41.074655 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "d5a5d36e-d1fb-4c47-ae7b-4970cca1712b" (UID: "d5a5d36e-d1fb-4c47-ae7b-4970cca1712b"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 07 10:14:41 crc kubenswrapper[5131]: I0107 10:14:41.075265 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "d5a5d36e-d1fb-4c47-ae7b-4970cca1712b" (UID: "d5a5d36e-d1fb-4c47-ae7b-4970cca1712b"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 10:14:41 crc kubenswrapper[5131]: I0107 10:14:41.075291 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "d5a5d36e-d1fb-4c47-ae7b-4970cca1712b" (UID: "d5a5d36e-d1fb-4c47-ae7b-4970cca1712b"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 10:14:41 crc kubenswrapper[5131]: I0107 10:14:41.075973 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "d5a5d36e-d1fb-4c47-ae7b-4970cca1712b" (UID: "d5a5d36e-d1fb-4c47-ae7b-4970cca1712b"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 10:14:41 crc kubenswrapper[5131]: I0107 10:14:41.076162 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "d5a5d36e-d1fb-4c47-ae7b-4970cca1712b" (UID: "d5a5d36e-d1fb-4c47-ae7b-4970cca1712b"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:14:41 crc kubenswrapper[5131]: I0107 10:14:41.076495 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "d5a5d36e-d1fb-4c47-ae7b-4970cca1712b" (UID: "d5a5d36e-d1fb-4c47-ae7b-4970cca1712b"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:14:41 crc kubenswrapper[5131]: I0107 10:14:41.079740 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-builder-dockercfg-vc6bg-push" (OuterVolumeSpecName: "builder-dockercfg-vc6bg-push") pod "d5a5d36e-d1fb-4c47-ae7b-4970cca1712b" (UID: "d5a5d36e-d1fb-4c47-ae7b-4970cca1712b"). InnerVolumeSpecName "builder-dockercfg-vc6bg-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 10:14:41 crc kubenswrapper[5131]: I0107 10:14:41.079765 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-builder-dockercfg-vc6bg-pull" (OuterVolumeSpecName: "builder-dockercfg-vc6bg-pull") pod "d5a5d36e-d1fb-4c47-ae7b-4970cca1712b" (UID: "d5a5d36e-d1fb-4c47-ae7b-4970cca1712b"). InnerVolumeSpecName "builder-dockercfg-vc6bg-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 10:14:41 crc kubenswrapper[5131]: I0107 10:14:41.081390 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-kube-api-access-x6rqg" (OuterVolumeSpecName: "kube-api-access-x6rqg") pod "d5a5d36e-d1fb-4c47-ae7b-4970cca1712b" (UID: "d5a5d36e-d1fb-4c47-ae7b-4970cca1712b"). InnerVolumeSpecName "kube-api-access-x6rqg". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 10:14:41 crc kubenswrapper[5131]: I0107 10:14:41.170027 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "d5a5d36e-d1fb-4c47-ae7b-4970cca1712b" (UID: "d5a5d36e-d1fb-4c47-ae7b-4970cca1712b"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:14:41 crc kubenswrapper[5131]: I0107 10:14:41.176301 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-x6rqg\" (UniqueName: \"kubernetes.io/projected/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-kube-api-access-x6rqg\") on node \"crc\" DevicePath \"\"" Jan 07 10:14:41 crc kubenswrapper[5131]: I0107 10:14:41.176344 5131 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 07 10:14:41 crc kubenswrapper[5131]: I0107 10:14:41.176359 5131 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 07 10:14:41 crc kubenswrapper[5131]: I0107 10:14:41.176371 5131 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 07 10:14:41 crc kubenswrapper[5131]: I0107 10:14:41.176383 5131 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-builder-dockercfg-vc6bg-push\") on node \"crc\" DevicePath \"\"" Jan 07 10:14:41 crc kubenswrapper[5131]: I0107 10:14:41.176395 5131 
reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 07 10:14:41 crc kubenswrapper[5131]: I0107 10:14:41.176406 5131 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 07 10:14:41 crc kubenswrapper[5131]: I0107 10:14:41.176417 5131 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 07 10:14:41 crc kubenswrapper[5131]: I0107 10:14:41.176429 5131 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 07 10:14:41 crc kubenswrapper[5131]: I0107 10:14:41.176441 5131 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-builder-dockercfg-vc6bg-pull\") on node \"crc\" DevicePath \"\"" Jan 07 10:14:41 crc kubenswrapper[5131]: I0107 10:14:41.715349 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"d5a5d36e-d1fb-4c47-ae7b-4970cca1712b","Type":"ContainerDied","Data":"8be772e2f2a9b61448f92f2a2d1fc159ad3ced958d7c14222a2ef058d99a8f9c"} Jan 07 10:14:41 crc kubenswrapper[5131]: I0107 10:14:41.715432 5131 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8be772e2f2a9b61448f92f2a2d1fc159ad3ced958d7c14222a2ef058d99a8f9c" Jan 07 10:14:41 crc kubenswrapper[5131]: I0107 10:14:41.715428 5131 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 07 10:14:41 crc kubenswrapper[5131]: I0107 10:14:41.919103 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "d5a5d36e-d1fb-4c47-ae7b-4970cca1712b" (UID: "d5a5d36e-d1fb-4c47-ae7b-4970cca1712b"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:14:41 crc kubenswrapper[5131]: I0107 10:14:41.989097 5131 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d5a5d36e-d1fb-4c47-ae7b-4970cca1712b-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 07 10:14:50 crc kubenswrapper[5131]: I0107 10:14:50.464206 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-1-build"] Jan 07 10:14:50 crc kubenswrapper[5131]: I0107 10:14:50.467049 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="aaef4e55-4b3b-4c8d-81cc-935dd5e45811" containerName="registry-server" Jan 07 10:14:50 crc kubenswrapper[5131]: I0107 10:14:50.467086 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaef4e55-4b3b-4c8d-81cc-935dd5e45811" containerName="registry-server" Jan 07 10:14:50 crc kubenswrapper[5131]: I0107 10:14:50.467122 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d5a5d36e-d1fb-4c47-ae7b-4970cca1712b" containerName="docker-build" Jan 07 10:14:50 crc kubenswrapper[5131]: I0107 10:14:50.467135 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5a5d36e-d1fb-4c47-ae7b-4970cca1712b" containerName="docker-build" Jan 07 10:14:50 crc kubenswrapper[5131]: I0107 10:14:50.467150 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="d5a5d36e-d1fb-4c47-ae7b-4970cca1712b" containerName="manage-dockerfile"
Jan 07 10:14:50 crc kubenswrapper[5131]: I0107 10:14:50.467165 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5a5d36e-d1fb-4c47-ae7b-4970cca1712b" containerName="manage-dockerfile"
Jan 07 10:14:50 crc kubenswrapper[5131]: I0107 10:14:50.467190 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="aaef4e55-4b3b-4c8d-81cc-935dd5e45811" containerName="extract-content"
Jan 07 10:14:50 crc kubenswrapper[5131]: I0107 10:14:50.467202 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaef4e55-4b3b-4c8d-81cc-935dd5e45811" containerName="extract-content"
Jan 07 10:14:50 crc kubenswrapper[5131]: I0107 10:14:50.467251 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="aaef4e55-4b3b-4c8d-81cc-935dd5e45811" containerName="extract-utilities"
Jan 07 10:14:50 crc kubenswrapper[5131]: I0107 10:14:50.467263 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaef4e55-4b3b-4c8d-81cc-935dd5e45811" containerName="extract-utilities"
Jan 07 10:14:50 crc kubenswrapper[5131]: I0107 10:14:50.467279 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d5a5d36e-d1fb-4c47-ae7b-4970cca1712b" containerName="git-clone"
Jan 07 10:14:50 crc kubenswrapper[5131]: I0107 10:14:50.467290 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5a5d36e-d1fb-4c47-ae7b-4970cca1712b" containerName="git-clone"
Jan 07 10:14:50 crc kubenswrapper[5131]: I0107 10:14:50.467499 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="aaef4e55-4b3b-4c8d-81cc-935dd5e45811" containerName="registry-server"
Jan 07 10:14:50 crc kubenswrapper[5131]: I0107 10:14:50.467518 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="d5a5d36e-d1fb-4c47-ae7b-4970cca1712b" containerName="docker-build"
Jan 07 10:14:50 crc kubenswrapper[5131]: I0107 10:14:50.597245 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-1-build"]
Jan 07 10:14:50 crc kubenswrapper[5131]: I0107 10:14:50.597445 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 07 10:14:50 crc kubenswrapper[5131]: I0107 10:14:50.600271 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-bundle-1-sys-config\""
Jan 07 10:14:50 crc kubenswrapper[5131]: I0107 10:14:50.600400 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-bundle-1-ca\""
Jan 07 10:14:50 crc kubenswrapper[5131]: I0107 10:14:50.600888 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-bundle-1-global-ca\""
Jan 07 10:14:50 crc kubenswrapper[5131]: I0107 10:14:50.601427 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-vc6bg\""
Jan 07 10:14:50 crc kubenswrapper[5131]: I0107 10:14:50.721415 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-build-system-configs\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 07 10:14:50 crc kubenswrapper[5131]: I0107 10:14:50.721510 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 07 10:14:50 crc kubenswrapper[5131]: I0107 10:14:50.721576 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-build-blob-cache\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 07 10:14:50 crc kubenswrapper[5131]: I0107 10:14:50.721680 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-node-pullsecrets\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 07 10:14:50 crc kubenswrapper[5131]: I0107 10:14:50.721802 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-buildworkdir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 07 10:14:50 crc kubenswrapper[5131]: I0107 10:14:50.721924 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-build-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 07 10:14:50 crc kubenswrapper[5131]: I0107 10:14:50.722127 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-container-storage-run\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 07 10:14:50 crc kubenswrapper[5131]: I0107 10:14:50.722179 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-buildcachedir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 07 10:14:50 crc kubenswrapper[5131]: I0107 10:14:50.722423 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-builder-dockercfg-vc6bg-pull\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 07 10:14:50 crc kubenswrapper[5131]: I0107 10:14:50.722516 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zvk7\" (UniqueName: \"kubernetes.io/projected/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-kube-api-access-8zvk7\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 07 10:14:50 crc kubenswrapper[5131]: I0107 10:14:50.722580 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-container-storage-root\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 07 10:14:50 crc kubenswrapper[5131]: I0107 10:14:50.722681 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-builder-dockercfg-vc6bg-push\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 07 10:14:50 crc kubenswrapper[5131]: I0107 10:14:50.824374 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-container-storage-root\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 07 10:14:50 crc kubenswrapper[5131]: I0107 10:14:50.824466 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-builder-dockercfg-vc6bg-push\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 07 10:14:50 crc kubenswrapper[5131]: I0107 10:14:50.824519 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-build-system-configs\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 07 10:14:50 crc kubenswrapper[5131]: I0107 10:14:50.824559 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 07 10:14:50 crc kubenswrapper[5131]: I0107 10:14:50.824605 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-build-blob-cache\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 07 10:14:50 crc kubenswrapper[5131]: I0107 10:14:50.824655 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-node-pullsecrets\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 07 10:14:50 crc kubenswrapper[5131]: I0107 10:14:50.824694 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-buildworkdir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 07 10:14:50 crc kubenswrapper[5131]: I0107 10:14:50.824769 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-build-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 07 10:14:50 crc kubenswrapper[5131]: I0107 10:14:50.825039 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-node-pullsecrets\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 07 10:14:50 crc kubenswrapper[5131]: I0107 10:14:50.825081 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-buildworkdir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 07 10:14:50 crc kubenswrapper[5131]: I0107 10:14:50.824778 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-container-storage-root\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 07 10:14:50 crc kubenswrapper[5131]: I0107 10:14:50.825340 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-build-system-configs\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 07 10:14:50 crc kubenswrapper[5131]: I0107 10:14:50.825515 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-build-blob-cache\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 07 10:14:50 crc kubenswrapper[5131]: I0107 10:14:50.825829 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 07 10:14:50 crc kubenswrapper[5131]: I0107 10:14:50.825919 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-container-storage-run\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 07 10:14:50 crc kubenswrapper[5131]: I0107 10:14:50.825970 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-buildcachedir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 07 10:14:50 crc kubenswrapper[5131]: I0107 10:14:50.826115 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-builder-dockercfg-vc6bg-pull\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 07 10:14:50 crc kubenswrapper[5131]: I0107 10:14:50.826164 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8zvk7\" (UniqueName: \"kubernetes.io/projected/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-kube-api-access-8zvk7\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 07 10:14:50 crc kubenswrapper[5131]: I0107 10:14:50.826404 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-container-storage-run\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 07 10:14:50 crc kubenswrapper[5131]: I0107 10:14:50.826528 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-build-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 07 10:14:50 crc kubenswrapper[5131]: I0107 10:14:50.826751 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-buildcachedir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 07 10:14:50 crc kubenswrapper[5131]: I0107 10:14:50.835349 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-builder-dockercfg-vc6bg-pull\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 07 10:14:50 crc kubenswrapper[5131]: I0107 10:14:50.835351 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-builder-dockercfg-vc6bg-push\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 07 10:14:50 crc kubenswrapper[5131]: I0107 10:14:50.853359 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zvk7\" (UniqueName: \"kubernetes.io/projected/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-kube-api-access-8zvk7\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 07 10:14:50 crc kubenswrapper[5131]: I0107 10:14:50.920151 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 07 10:14:51 crc kubenswrapper[5131]: I0107 10:14:51.173659 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-1-build"]
Jan 07 10:14:51 crc kubenswrapper[5131]: I0107 10:14:51.797234 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-1-build" event={"ID":"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8","Type":"ContainerStarted","Data":"fd8329bf46910382506c509557b35ead2a5d49dee4840d00a1fe2fb618d26242"}
Jan 07 10:14:52 crc kubenswrapper[5131]: I0107 10:14:52.807384 5131 generic.go:358] "Generic (PLEG): container finished" podID="8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8" containerID="91f8f198a2120c06518d8805f2a2f86ab7263ea81cfb45db2238957c5edcb927" exitCode=0
Jan 07 10:14:52 crc kubenswrapper[5131]: I0107 10:14:52.807480 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-1-build" event={"ID":"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8","Type":"ContainerDied","Data":"91f8f198a2120c06518d8805f2a2f86ab7263ea81cfb45db2238957c5edcb927"}
Jan 07 10:14:53 crc kubenswrapper[5131]: I0107 10:14:53.818721 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-bundle-1-build_8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8/docker-build/0.log"
Jan 07 10:14:53 crc kubenswrapper[5131]: I0107 10:14:53.819592 5131 generic.go:358] "Generic (PLEG): container finished" podID="8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8" containerID="300568a7b9eefd7c441678a4c57985bf9235cc34d6e428bf264b992bbe3dce51" exitCode=1
Jan 07 10:14:53 crc kubenswrapper[5131]: I0107 10:14:53.819642 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-1-build" event={"ID":"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8","Type":"ContainerDied","Data":"300568a7b9eefd7c441678a4c57985bf9235cc34d6e428bf264b992bbe3dce51"}
Jan 07 10:14:55 crc kubenswrapper[5131]: I0107 10:14:55.136038 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-bundle-1-build_8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8/docker-build/0.log"
Jan 07 10:14:55 crc kubenswrapper[5131]: I0107 10:14:55.136905 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 07 10:14:55 crc kubenswrapper[5131]: I0107 10:14:55.294169 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-container-storage-root\") pod \"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8\" (UID: \"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8\") "
Jan 07 10:14:55 crc kubenswrapper[5131]: I0107 10:14:55.294260 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-build-blob-cache\") pod \"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8\" (UID: \"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8\") "
Jan 07 10:14:55 crc kubenswrapper[5131]: I0107 10:14:55.294318 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-build-system-configs\") pod \"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8\" (UID: \"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8\") "
Jan 07 10:14:55 crc kubenswrapper[5131]: I0107 10:14:55.294391 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-container-storage-run\") pod \"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8\" (UID: \"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8\") "
Jan 07 10:14:55 crc kubenswrapper[5131]: I0107 10:14:55.294518 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-buildworkdir\") pod \"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8\" (UID: \"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8\") "
Jan 07 10:14:55 crc kubenswrapper[5131]: I0107 10:14:55.294640 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-builder-dockercfg-vc6bg-pull\") pod \"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8\" (UID: \"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8\") "
Jan 07 10:14:55 crc kubenswrapper[5131]: I0107 10:14:55.294740 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-build-proxy-ca-bundles\") pod \"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8\" (UID: \"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8\") "
Jan 07 10:14:55 crc kubenswrapper[5131]: I0107 10:14:55.294806 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zvk7\" (UniqueName: \"kubernetes.io/projected/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-kube-api-access-8zvk7\") pod \"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8\" (UID: \"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8\") "
Jan 07 10:14:55 crc kubenswrapper[5131]: I0107 10:14:55.295086 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-node-pullsecrets\") pod \"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8\" (UID: \"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8\") "
Jan 07 10:14:55 crc kubenswrapper[5131]: I0107 10:14:55.295129 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-buildcachedir\") pod \"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8\" (UID: \"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8\") "
Jan 07 10:14:55 crc kubenswrapper[5131]: I0107 10:14:55.295132 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8" (UID: "8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 07 10:14:55 crc kubenswrapper[5131]: I0107 10:14:55.295217 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-build-ca-bundles\") pod \"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8\" (UID: \"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8\") "
Jan 07 10:14:55 crc kubenswrapper[5131]: I0107 10:14:55.295307 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-builder-dockercfg-vc6bg-push\") pod \"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8\" (UID: \"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8\") "
Jan 07 10:14:55 crc kubenswrapper[5131]: I0107 10:14:55.295307 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8" (UID: "8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 07 10:14:55 crc kubenswrapper[5131]: I0107 10:14:55.295345 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8" (UID: "8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 07 10:14:55 crc kubenswrapper[5131]: I0107 10:14:55.296079 5131 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Jan 07 10:14:55 crc kubenswrapper[5131]: I0107 10:14:55.296128 5131 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-buildcachedir\") on node \"crc\" DevicePath \"\""
Jan 07 10:14:55 crc kubenswrapper[5131]: I0107 10:14:55.296152 5131 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-build-blob-cache\") on node \"crc\" DevicePath \"\""
Jan 07 10:14:55 crc kubenswrapper[5131]: I0107 10:14:55.296285 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8" (UID: "8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 07 10:14:55 crc kubenswrapper[5131]: I0107 10:14:55.296304 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8" (UID: "8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 07 10:14:55 crc kubenswrapper[5131]: I0107 10:14:55.296345 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8" (UID: "8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 07 10:14:55 crc kubenswrapper[5131]: I0107 10:14:55.296724 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8" (UID: "8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 07 10:14:55 crc kubenswrapper[5131]: I0107 10:14:55.296802 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8" (UID: "8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 07 10:14:55 crc kubenswrapper[5131]: I0107 10:14:55.297605 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8" (UID: "8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 07 10:14:55 crc kubenswrapper[5131]: I0107 10:14:55.305914 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-builder-dockercfg-vc6bg-pull" (OuterVolumeSpecName: "builder-dockercfg-vc6bg-pull") pod "8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8" (UID: "8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8"). InnerVolumeSpecName "builder-dockercfg-vc6bg-pull". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 07 10:14:55 crc kubenswrapper[5131]: I0107 10:14:55.306749 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-kube-api-access-8zvk7" (OuterVolumeSpecName: "kube-api-access-8zvk7") pod "8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8" (UID: "8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8"). InnerVolumeSpecName "kube-api-access-8zvk7". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 07 10:14:55 crc kubenswrapper[5131]: I0107 10:14:55.310249 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-builder-dockercfg-vc6bg-push" (OuterVolumeSpecName: "builder-dockercfg-vc6bg-push") pod "8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8" (UID: "8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8"). InnerVolumeSpecName "builder-dockercfg-vc6bg-push". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 07 10:14:55 crc kubenswrapper[5131]: I0107 10:14:55.397420 5131 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-buildworkdir\") on node \"crc\" DevicePath \"\""
Jan 07 10:14:55 crc kubenswrapper[5131]: I0107 10:14:55.397464 5131 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-builder-dockercfg-vc6bg-pull\") on node \"crc\" DevicePath \"\""
Jan 07 10:14:55 crc kubenswrapper[5131]: I0107 10:14:55.397480 5131 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 07 10:14:55 crc kubenswrapper[5131]: I0107 10:14:55.397492 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8zvk7\" (UniqueName: \"kubernetes.io/projected/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-kube-api-access-8zvk7\") on node \"crc\" DevicePath \"\""
Jan 07 10:14:55 crc kubenswrapper[5131]: I0107 10:14:55.397503 5131 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 07 10:14:55 crc kubenswrapper[5131]: I0107 10:14:55.397515 5131 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-builder-dockercfg-vc6bg-push\") on node \"crc\" DevicePath \"\""
Jan 07 10:14:55 crc kubenswrapper[5131]: I0107 10:14:55.397526 5131 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-container-storage-root\") on node \"crc\" DevicePath \"\""
Jan 07 10:14:55 crc kubenswrapper[5131]: I0107 10:14:55.397537 5131 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-build-system-configs\") on node \"crc\" DevicePath \"\""
Jan 07 10:14:55 crc kubenswrapper[5131]: I0107 10:14:55.397548 5131 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8-container-storage-run\") on node \"crc\" DevicePath \"\""
Jan 07 10:14:55 crc kubenswrapper[5131]: I0107 10:14:55.843182 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-bundle-1-build_8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8/docker-build/0.log"
Jan 07 10:14:55 crc kubenswrapper[5131]: I0107 10:14:55.843984 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-1-build" event={"ID":"8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8","Type":"ContainerDied","Data":"fd8329bf46910382506c509557b35ead2a5d49dee4840d00a1fe2fb618d26242"}
Jan 07 10:14:55 crc kubenswrapper[5131]: I0107 10:14:55.844041 5131 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd8329bf46910382506c509557b35ead2a5d49dee4840d00a1fe2fb618d26242"
Jan 07 10:14:55 crc kubenswrapper[5131]: I0107 10:14:55.844047 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 07 10:15:00 crc kubenswrapper[5131]: I0107 10:15:00.148220 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29463015-494rq"]
Jan 07 10:15:00 crc kubenswrapper[5131]: I0107 10:15:00.150238 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8" containerName="manage-dockerfile"
Jan 07 10:15:00 crc kubenswrapper[5131]: I0107 10:15:00.150272 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8" containerName="manage-dockerfile"
Jan 07 10:15:00 crc kubenswrapper[5131]: I0107 10:15:00.150334 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8" containerName="docker-build"
Jan 07 10:15:00 crc kubenswrapper[5131]: I0107 10:15:00.150351 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8" containerName="docker-build"
Jan 07 10:15:00 crc kubenswrapper[5131]: I0107 10:15:00.150603 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8" containerName="docker-build"
Jan 07 10:15:00 crc kubenswrapper[5131]: I0107 10:15:00.290918 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29463015-494rq"
Jan 07 10:15:00 crc kubenswrapper[5131]: I0107 10:15:00.293340 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\""
Jan 07 10:15:00 crc kubenswrapper[5131]: I0107 10:15:00.293340 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\""
Jan 07 10:15:00 crc kubenswrapper[5131]: I0107 10:15:00.303521 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29463015-494rq"]
Jan 07 10:15:00 crc kubenswrapper[5131]: I0107 10:15:00.383812 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4227fe0-2ef1-40e6-954c-eb8d3dd11db6-config-volume\") pod \"collect-profiles-29463015-494rq\" (UID: \"a4227fe0-2ef1-40e6-954c-eb8d3dd11db6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29463015-494rq"
Jan 07 10:15:00 crc kubenswrapper[5131]: I0107 10:15:00.384017 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w86gd\" (UniqueName: \"kubernetes.io/projected/a4227fe0-2ef1-40e6-954c-eb8d3dd11db6-kube-api-access-w86gd\") pod \"collect-profiles-29463015-494rq\" (UID: \"a4227fe0-2ef1-40e6-954c-eb8d3dd11db6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29463015-494rq"
Jan 07 10:15:00 crc kubenswrapper[5131]: I0107 10:15:00.384258 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4227fe0-2ef1-40e6-954c-eb8d3dd11db6-secret-volume\") pod \"collect-profiles-29463015-494rq\" (UID: \"a4227fe0-2ef1-40e6-954c-eb8d3dd11db6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29463015-494rq"
Jan 07 10:15:00 crc kubenswrapper[5131]: I0107 10:15:00.486317 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4227fe0-2ef1-40e6-954c-eb8d3dd11db6-config-volume\") pod \"collect-profiles-29463015-494rq\" (UID: \"a4227fe0-2ef1-40e6-954c-eb8d3dd11db6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29463015-494rq"
Jan 07 10:15:00 crc kubenswrapper[5131]: I0107 10:15:00.486405 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w86gd\" (UniqueName: \"kubernetes.io/projected/a4227fe0-2ef1-40e6-954c-eb8d3dd11db6-kube-api-access-w86gd\") pod \"collect-profiles-29463015-494rq\" (UID: \"a4227fe0-2ef1-40e6-954c-eb8d3dd11db6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29463015-494rq"
Jan 07 10:15:00 crc kubenswrapper[5131]: I0107 10:15:00.486453 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4227fe0-2ef1-40e6-954c-eb8d3dd11db6-secret-volume\") pod \"collect-profiles-29463015-494rq\" (UID: \"a4227fe0-2ef1-40e6-954c-eb8d3dd11db6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29463015-494rq"
Jan 07 10:15:00 crc kubenswrapper[5131]: I0107 10:15:00.488221 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4227fe0-2ef1-40e6-954c-eb8d3dd11db6-config-volume\") pod \"collect-profiles-29463015-494rq\" (UID: \"a4227fe0-2ef1-40e6-954c-eb8d3dd11db6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29463015-494rq"
Jan 07 10:15:00 crc kubenswrapper[5131]: I0107 10:15:00.496336 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName:
\"kubernetes.io/secret/a4227fe0-2ef1-40e6-954c-eb8d3dd11db6-secret-volume\") pod \"collect-profiles-29463015-494rq\" (UID: \"a4227fe0-2ef1-40e6-954c-eb8d3dd11db6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29463015-494rq" Jan 07 10:15:00 crc kubenswrapper[5131]: I0107 10:15:00.510245 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w86gd\" (UniqueName: \"kubernetes.io/projected/a4227fe0-2ef1-40e6-954c-eb8d3dd11db6-kube-api-access-w86gd\") pod \"collect-profiles-29463015-494rq\" (UID: \"a4227fe0-2ef1-40e6-954c-eb8d3dd11db6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29463015-494rq" Jan 07 10:15:00 crc kubenswrapper[5131]: I0107 10:15:00.616072 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29463015-494rq" Jan 07 10:15:00 crc kubenswrapper[5131]: I0107 10:15:00.972083 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-1-build"] Jan 07 10:15:00 crc kubenswrapper[5131]: I0107 10:15:00.976439 5131 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-1-build"] Jan 07 10:15:01 crc kubenswrapper[5131]: I0107 10:15:01.077795 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29463015-494rq"] Jan 07 10:15:01 crc kubenswrapper[5131]: I0107 10:15:01.917946 5131 generic.go:358] "Generic (PLEG): container finished" podID="a4227fe0-2ef1-40e6-954c-eb8d3dd11db6" containerID="dd0359f1aae5f221afa35c3b04ff96763dc727978294168ff58c2361b22a89d1" exitCode=0 Jan 07 10:15:01 crc kubenswrapper[5131]: I0107 10:15:01.918068 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29463015-494rq" 
event={"ID":"a4227fe0-2ef1-40e6-954c-eb8d3dd11db6","Type":"ContainerDied","Data":"dd0359f1aae5f221afa35c3b04ff96763dc727978294168ff58c2361b22a89d1"} Jan 07 10:15:01 crc kubenswrapper[5131]: I0107 10:15:01.918544 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29463015-494rq" event={"ID":"a4227fe0-2ef1-40e6-954c-eb8d3dd11db6","Type":"ContainerStarted","Data":"c7384d58d9fadcde26023833f43b815a3f570cb3f1b81cc95a775ec42f4d64f9"} Jan 07 10:15:02 crc kubenswrapper[5131]: I0107 10:15:02.191453 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8" path="/var/lib/kubelet/pods/8f9c7fc0-3969-4ec4-bb96-375d8aecf0a8/volumes" Jan 07 10:15:02 crc kubenswrapper[5131]: I0107 10:15:02.645540 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-2-build"] Jan 07 10:15:02 crc kubenswrapper[5131]: I0107 10:15:02.652966 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 07 10:15:02 crc kubenswrapper[5131]: I0107 10:15:02.656442 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-bundle-2-ca\"" Jan 07 10:15:02 crc kubenswrapper[5131]: I0107 10:15:02.656705 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-bundle-2-sys-config\"" Jan 07 10:15:02 crc kubenswrapper[5131]: I0107 10:15:02.657192 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-vc6bg\"" Jan 07 10:15:02 crc kubenswrapper[5131]: I0107 10:15:02.657609 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-bundle-2-global-ca\"" Jan 07 10:15:02 crc kubenswrapper[5131]: I0107 10:15:02.675942 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-2-build"] Jan 07 10:15:02 crc kubenswrapper[5131]: I0107 10:15:02.718473 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/2e9d8a56-496e-4486-bff2-cf23c11d843f-builder-dockercfg-vc6bg-pull\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"2e9d8a56-496e-4486-bff2-cf23c11d843f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 07 10:15:02 crc kubenswrapper[5131]: I0107 10:15:02.718552 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2e9d8a56-496e-4486-bff2-cf23c11d843f-build-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"2e9d8a56-496e-4486-bff2-cf23c11d843f\") " 
pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 07 10:15:02 crc kubenswrapper[5131]: I0107 10:15:02.718581 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/2e9d8a56-496e-4486-bff2-cf23c11d843f-builder-dockercfg-vc6bg-push\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"2e9d8a56-496e-4486-bff2-cf23c11d843f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 07 10:15:02 crc kubenswrapper[5131]: I0107 10:15:02.718611 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2e9d8a56-496e-4486-bff2-cf23c11d843f-container-storage-run\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"2e9d8a56-496e-4486-bff2-cf23c11d843f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 07 10:15:02 crc kubenswrapper[5131]: I0107 10:15:02.718640 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2e9d8a56-496e-4486-bff2-cf23c11d843f-node-pullsecrets\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"2e9d8a56-496e-4486-bff2-cf23c11d843f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 07 10:15:02 crc kubenswrapper[5131]: I0107 10:15:02.721491 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2e9d8a56-496e-4486-bff2-cf23c11d843f-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"2e9d8a56-496e-4486-bff2-cf23c11d843f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 07 10:15:02 crc kubenswrapper[5131]: I0107 10:15:02.721761 5131 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2e9d8a56-496e-4486-bff2-cf23c11d843f-build-system-configs\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"2e9d8a56-496e-4486-bff2-cf23c11d843f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 07 10:15:02 crc kubenswrapper[5131]: I0107 10:15:02.722054 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2e9d8a56-496e-4486-bff2-cf23c11d843f-buildcachedir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"2e9d8a56-496e-4486-bff2-cf23c11d843f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 07 10:15:02 crc kubenswrapper[5131]: I0107 10:15:02.722237 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bnhr\" (UniqueName: \"kubernetes.io/projected/2e9d8a56-496e-4486-bff2-cf23c11d843f-kube-api-access-8bnhr\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"2e9d8a56-496e-4486-bff2-cf23c11d843f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 07 10:15:02 crc kubenswrapper[5131]: I0107 10:15:02.722431 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2e9d8a56-496e-4486-bff2-cf23c11d843f-container-storage-root\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"2e9d8a56-496e-4486-bff2-cf23c11d843f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 07 10:15:02 crc kubenswrapper[5131]: I0107 10:15:02.722942 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2e9d8a56-496e-4486-bff2-cf23c11d843f-buildworkdir\") pod 
\"service-telemetry-operator-bundle-2-build\" (UID: \"2e9d8a56-496e-4486-bff2-cf23c11d843f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 07 10:15:02 crc kubenswrapper[5131]: I0107 10:15:02.723151 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/2e9d8a56-496e-4486-bff2-cf23c11d843f-build-blob-cache\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"2e9d8a56-496e-4486-bff2-cf23c11d843f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 07 10:15:02 crc kubenswrapper[5131]: I0107 10:15:02.825324 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2e9d8a56-496e-4486-bff2-cf23c11d843f-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"2e9d8a56-496e-4486-bff2-cf23c11d843f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 07 10:15:02 crc kubenswrapper[5131]: I0107 10:15:02.825874 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2e9d8a56-496e-4486-bff2-cf23c11d843f-build-system-configs\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"2e9d8a56-496e-4486-bff2-cf23c11d843f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 07 10:15:02 crc kubenswrapper[5131]: I0107 10:15:02.825993 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2e9d8a56-496e-4486-bff2-cf23c11d843f-buildcachedir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"2e9d8a56-496e-4486-bff2-cf23c11d843f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 07 10:15:02 crc kubenswrapper[5131]: I0107 10:15:02.826086 5131 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-8bnhr\" (UniqueName: \"kubernetes.io/projected/2e9d8a56-496e-4486-bff2-cf23c11d843f-kube-api-access-8bnhr\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"2e9d8a56-496e-4486-bff2-cf23c11d843f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 07 10:15:02 crc kubenswrapper[5131]: I0107 10:15:02.826173 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2e9d8a56-496e-4486-bff2-cf23c11d843f-buildcachedir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"2e9d8a56-496e-4486-bff2-cf23c11d843f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 07 10:15:02 crc kubenswrapper[5131]: I0107 10:15:02.826269 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2e9d8a56-496e-4486-bff2-cf23c11d843f-container-storage-root\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"2e9d8a56-496e-4486-bff2-cf23c11d843f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 07 10:15:02 crc kubenswrapper[5131]: I0107 10:15:02.826390 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2e9d8a56-496e-4486-bff2-cf23c11d843f-buildworkdir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"2e9d8a56-496e-4486-bff2-cf23c11d843f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 07 10:15:02 crc kubenswrapper[5131]: I0107 10:15:02.826519 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/2e9d8a56-496e-4486-bff2-cf23c11d843f-build-blob-cache\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"2e9d8a56-496e-4486-bff2-cf23c11d843f\") " 
pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 07 10:15:02 crc kubenswrapper[5131]: I0107 10:15:02.826606 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2e9d8a56-496e-4486-bff2-cf23c11d843f-container-storage-root\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"2e9d8a56-496e-4486-bff2-cf23c11d843f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 07 10:15:02 crc kubenswrapper[5131]: I0107 10:15:02.826655 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2e9d8a56-496e-4486-bff2-cf23c11d843f-buildworkdir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"2e9d8a56-496e-4486-bff2-cf23c11d843f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 07 10:15:02 crc kubenswrapper[5131]: I0107 10:15:02.826613 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2e9d8a56-496e-4486-bff2-cf23c11d843f-build-system-configs\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"2e9d8a56-496e-4486-bff2-cf23c11d843f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 07 10:15:02 crc kubenswrapper[5131]: I0107 10:15:02.826686 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2e9d8a56-496e-4486-bff2-cf23c11d843f-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"2e9d8a56-496e-4486-bff2-cf23c11d843f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 07 10:15:02 crc kubenswrapper[5131]: I0107 10:15:02.826888 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: 
\"kubernetes.io/empty-dir/2e9d8a56-496e-4486-bff2-cf23c11d843f-build-blob-cache\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"2e9d8a56-496e-4486-bff2-cf23c11d843f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 07 10:15:02 crc kubenswrapper[5131]: I0107 10:15:02.827004 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/2e9d8a56-496e-4486-bff2-cf23c11d843f-builder-dockercfg-vc6bg-pull\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"2e9d8a56-496e-4486-bff2-cf23c11d843f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 07 10:15:02 crc kubenswrapper[5131]: I0107 10:15:02.827127 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2e9d8a56-496e-4486-bff2-cf23c11d843f-build-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"2e9d8a56-496e-4486-bff2-cf23c11d843f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 07 10:15:02 crc kubenswrapper[5131]: I0107 10:15:02.827255 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/2e9d8a56-496e-4486-bff2-cf23c11d843f-builder-dockercfg-vc6bg-push\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"2e9d8a56-496e-4486-bff2-cf23c11d843f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 07 10:15:02 crc kubenswrapper[5131]: I0107 10:15:02.827362 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2e9d8a56-496e-4486-bff2-cf23c11d843f-container-storage-run\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"2e9d8a56-496e-4486-bff2-cf23c11d843f\") " 
pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 07 10:15:02 crc kubenswrapper[5131]: I0107 10:15:02.827502 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2e9d8a56-496e-4486-bff2-cf23c11d843f-node-pullsecrets\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"2e9d8a56-496e-4486-bff2-cf23c11d843f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 07 10:15:02 crc kubenswrapper[5131]: I0107 10:15:02.827637 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2e9d8a56-496e-4486-bff2-cf23c11d843f-node-pullsecrets\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"2e9d8a56-496e-4486-bff2-cf23c11d843f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 07 10:15:02 crc kubenswrapper[5131]: I0107 10:15:02.827698 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2e9d8a56-496e-4486-bff2-cf23c11d843f-container-storage-run\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"2e9d8a56-496e-4486-bff2-cf23c11d843f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 07 10:15:02 crc kubenswrapper[5131]: I0107 10:15:02.828491 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2e9d8a56-496e-4486-bff2-cf23c11d843f-build-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"2e9d8a56-496e-4486-bff2-cf23c11d843f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 07 10:15:02 crc kubenswrapper[5131]: I0107 10:15:02.834146 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: 
\"kubernetes.io/secret/2e9d8a56-496e-4486-bff2-cf23c11d843f-builder-dockercfg-vc6bg-push\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"2e9d8a56-496e-4486-bff2-cf23c11d843f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 07 10:15:02 crc kubenswrapper[5131]: I0107 10:15:02.838762 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/2e9d8a56-496e-4486-bff2-cf23c11d843f-builder-dockercfg-vc6bg-pull\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"2e9d8a56-496e-4486-bff2-cf23c11d843f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 07 10:15:02 crc kubenswrapper[5131]: I0107 10:15:02.843128 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bnhr\" (UniqueName: \"kubernetes.io/projected/2e9d8a56-496e-4486-bff2-cf23c11d843f-kube-api-access-8bnhr\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"2e9d8a56-496e-4486-bff2-cf23c11d843f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 07 10:15:03 crc kubenswrapper[5131]: I0107 10:15:03.033670 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 07 10:15:03 crc kubenswrapper[5131]: I0107 10:15:03.139127 5131 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29463015-494rq" Jan 07 10:15:03 crc kubenswrapper[5131]: I0107 10:15:03.236659 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4227fe0-2ef1-40e6-954c-eb8d3dd11db6-secret-volume\") pod \"a4227fe0-2ef1-40e6-954c-eb8d3dd11db6\" (UID: \"a4227fe0-2ef1-40e6-954c-eb8d3dd11db6\") " Jan 07 10:15:03 crc kubenswrapper[5131]: I0107 10:15:03.236770 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w86gd\" (UniqueName: \"kubernetes.io/projected/a4227fe0-2ef1-40e6-954c-eb8d3dd11db6-kube-api-access-w86gd\") pod \"a4227fe0-2ef1-40e6-954c-eb8d3dd11db6\" (UID: \"a4227fe0-2ef1-40e6-954c-eb8d3dd11db6\") " Jan 07 10:15:03 crc kubenswrapper[5131]: I0107 10:15:03.236799 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4227fe0-2ef1-40e6-954c-eb8d3dd11db6-config-volume\") pod \"a4227fe0-2ef1-40e6-954c-eb8d3dd11db6\" (UID: \"a4227fe0-2ef1-40e6-954c-eb8d3dd11db6\") " Jan 07 10:15:03 crc kubenswrapper[5131]: I0107 10:15:03.237407 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4227fe0-2ef1-40e6-954c-eb8d3dd11db6-config-volume" (OuterVolumeSpecName: "config-volume") pod "a4227fe0-2ef1-40e6-954c-eb8d3dd11db6" (UID: "a4227fe0-2ef1-40e6-954c-eb8d3dd11db6"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 10:15:03 crc kubenswrapper[5131]: I0107 10:15:03.237713 5131 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4227fe0-2ef1-40e6-954c-eb8d3dd11db6-config-volume\") on node \"crc\" DevicePath \"\"" Jan 07 10:15:03 crc kubenswrapper[5131]: I0107 10:15:03.241315 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4227fe0-2ef1-40e6-954c-eb8d3dd11db6-kube-api-access-w86gd" (OuterVolumeSpecName: "kube-api-access-w86gd") pod "a4227fe0-2ef1-40e6-954c-eb8d3dd11db6" (UID: "a4227fe0-2ef1-40e6-954c-eb8d3dd11db6"). InnerVolumeSpecName "kube-api-access-w86gd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 10:15:03 crc kubenswrapper[5131]: I0107 10:15:03.241500 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4227fe0-2ef1-40e6-954c-eb8d3dd11db6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a4227fe0-2ef1-40e6-954c-eb8d3dd11db6" (UID: "a4227fe0-2ef1-40e6-954c-eb8d3dd11db6"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 10:15:03 crc kubenswrapper[5131]: I0107 10:15:03.270051 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-2-build"] Jan 07 10:15:03 crc kubenswrapper[5131]: W0107 10:15:03.270147 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e9d8a56_496e_4486_bff2_cf23c11d843f.slice/crio-2a81a3d97cae09fc1c11c2437c1e4c9b49c9b634d93010ab2e2d8e0061fe494c WatchSource:0}: Error finding container 2a81a3d97cae09fc1c11c2437c1e4c9b49c9b634d93010ab2e2d8e0061fe494c: Status 404 returned error can't find the container with id 2a81a3d97cae09fc1c11c2437c1e4c9b49c9b634d93010ab2e2d8e0061fe494c Jan 07 10:15:03 crc kubenswrapper[5131]: I0107 10:15:03.339358 5131 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4227fe0-2ef1-40e6-954c-eb8d3dd11db6-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 07 10:15:03 crc kubenswrapper[5131]: I0107 10:15:03.339406 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w86gd\" (UniqueName: \"kubernetes.io/projected/a4227fe0-2ef1-40e6-954c-eb8d3dd11db6-kube-api-access-w86gd\") on node \"crc\" DevicePath \"\"" Jan 07 10:15:03 crc kubenswrapper[5131]: I0107 10:15:03.937231 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"2e9d8a56-496e-4486-bff2-cf23c11d843f","Type":"ContainerStarted","Data":"1347d500821d158a83580179fa793fa57da96dfc8b8842cc8d8731cf4ae60037"} Jan 07 10:15:03 crc kubenswrapper[5131]: I0107 10:15:03.938366 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"2e9d8a56-496e-4486-bff2-cf23c11d843f","Type":"ContainerStarted","Data":"2a81a3d97cae09fc1c11c2437c1e4c9b49c9b634d93010ab2e2d8e0061fe494c"} Jan 
07 10:15:03 crc kubenswrapper[5131]: I0107 10:15:03.942142 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29463015-494rq" Jan 07 10:15:03 crc kubenswrapper[5131]: I0107 10:15:03.942172 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29463015-494rq" event={"ID":"a4227fe0-2ef1-40e6-954c-eb8d3dd11db6","Type":"ContainerDied","Data":"c7384d58d9fadcde26023833f43b815a3f570cb3f1b81cc95a775ec42f4d64f9"} Jan 07 10:15:03 crc kubenswrapper[5131]: I0107 10:15:03.942253 5131 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7384d58d9fadcde26023833f43b815a3f570cb3f1b81cc95a775ec42f4d64f9" Jan 07 10:15:04 crc kubenswrapper[5131]: E0107 10:15:04.096658 5131 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.220:52704->38.102.83.220:39377: write tcp 38.102.83.220:52704->38.102.83.220:39377: write: broken pipe Jan 07 10:15:04 crc kubenswrapper[5131]: I0107 10:15:04.950084 5131 generic.go:358] "Generic (PLEG): container finished" podID="2e9d8a56-496e-4486-bff2-cf23c11d843f" containerID="1347d500821d158a83580179fa793fa57da96dfc8b8842cc8d8731cf4ae60037" exitCode=0 Jan 07 10:15:04 crc kubenswrapper[5131]: I0107 10:15:04.950258 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"2e9d8a56-496e-4486-bff2-cf23c11d843f","Type":"ContainerDied","Data":"1347d500821d158a83580179fa793fa57da96dfc8b8842cc8d8731cf4ae60037"} Jan 07 10:15:05 crc kubenswrapper[5131]: I0107 10:15:05.963149 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"2e9d8a56-496e-4486-bff2-cf23c11d843f","Type":"ContainerDied","Data":"76798e917cfd92fdd6a2031efb132035f9b921307bd0bfb5ef4322efb9397691"} Jan 07 10:15:05 crc kubenswrapper[5131]: 
I0107 10:15:05.963188 5131 generic.go:358] "Generic (PLEG): container finished" podID="2e9d8a56-496e-4486-bff2-cf23c11d843f" containerID="76798e917cfd92fdd6a2031efb132035f9b921307bd0bfb5ef4322efb9397691" exitCode=0 Jan 07 10:15:06 crc kubenswrapper[5131]: I0107 10:15:06.002370 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-bundle-2-build_2e9d8a56-496e-4486-bff2-cf23c11d843f/manage-dockerfile/0.log" Jan 07 10:15:06 crc kubenswrapper[5131]: I0107 10:15:06.979856 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"2e9d8a56-496e-4486-bff2-cf23c11d843f","Type":"ContainerStarted","Data":"45d57c8dcdd417c6d36929ba0304a34838b07945ed3147bdefc6b78a84162836"} Jan 07 10:15:11 crc kubenswrapper[5131]: I0107 10:15:11.011201 5131 generic.go:358] "Generic (PLEG): container finished" podID="2e9d8a56-496e-4486-bff2-cf23c11d843f" containerID="45d57c8dcdd417c6d36929ba0304a34838b07945ed3147bdefc6b78a84162836" exitCode=0 Jan 07 10:15:11 crc kubenswrapper[5131]: I0107 10:15:11.011280 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"2e9d8a56-496e-4486-bff2-cf23c11d843f","Type":"ContainerDied","Data":"45d57c8dcdd417c6d36929ba0304a34838b07945ed3147bdefc6b78a84162836"} Jan 07 10:15:12 crc kubenswrapper[5131]: I0107 10:15:12.303312 5131 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 07 10:15:12 crc kubenswrapper[5131]: I0107 10:15:12.381080 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2e9d8a56-496e-4486-bff2-cf23c11d843f-build-proxy-ca-bundles\") pod \"2e9d8a56-496e-4486-bff2-cf23c11d843f\" (UID: \"2e9d8a56-496e-4486-bff2-cf23c11d843f\") " Jan 07 10:15:12 crc kubenswrapper[5131]: I0107 10:15:12.381131 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2e9d8a56-496e-4486-bff2-cf23c11d843f-container-storage-root\") pod \"2e9d8a56-496e-4486-bff2-cf23c11d843f\" (UID: \"2e9d8a56-496e-4486-bff2-cf23c11d843f\") " Jan 07 10:15:12 crc kubenswrapper[5131]: I0107 10:15:12.381168 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2e9d8a56-496e-4486-bff2-cf23c11d843f-container-storage-run\") pod \"2e9d8a56-496e-4486-bff2-cf23c11d843f\" (UID: \"2e9d8a56-496e-4486-bff2-cf23c11d843f\") " Jan 07 10:15:12 crc kubenswrapper[5131]: I0107 10:15:12.381223 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/2e9d8a56-496e-4486-bff2-cf23c11d843f-build-blob-cache\") pod \"2e9d8a56-496e-4486-bff2-cf23c11d843f\" (UID: \"2e9d8a56-496e-4486-bff2-cf23c11d843f\") " Jan 07 10:15:12 crc kubenswrapper[5131]: I0107 10:15:12.381252 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bnhr\" (UniqueName: \"kubernetes.io/projected/2e9d8a56-496e-4486-bff2-cf23c11d843f-kube-api-access-8bnhr\") pod \"2e9d8a56-496e-4486-bff2-cf23c11d843f\" (UID: \"2e9d8a56-496e-4486-bff2-cf23c11d843f\") " Jan 07 10:15:12 crc kubenswrapper[5131]: I0107 10:15:12.381475 5131 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/2e9d8a56-496e-4486-bff2-cf23c11d843f-builder-dockercfg-vc6bg-pull\") pod \"2e9d8a56-496e-4486-bff2-cf23c11d843f\" (UID: \"2e9d8a56-496e-4486-bff2-cf23c11d843f\") " Jan 07 10:15:12 crc kubenswrapper[5131]: I0107 10:15:12.381512 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2e9d8a56-496e-4486-bff2-cf23c11d843f-build-system-configs\") pod \"2e9d8a56-496e-4486-bff2-cf23c11d843f\" (UID: \"2e9d8a56-496e-4486-bff2-cf23c11d843f\") " Jan 07 10:15:12 crc kubenswrapper[5131]: I0107 10:15:12.381570 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2e9d8a56-496e-4486-bff2-cf23c11d843f-buildcachedir\") pod \"2e9d8a56-496e-4486-bff2-cf23c11d843f\" (UID: \"2e9d8a56-496e-4486-bff2-cf23c11d843f\") " Jan 07 10:15:12 crc kubenswrapper[5131]: I0107 10:15:12.381624 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2e9d8a56-496e-4486-bff2-cf23c11d843f-build-ca-bundles\") pod \"2e9d8a56-496e-4486-bff2-cf23c11d843f\" (UID: \"2e9d8a56-496e-4486-bff2-cf23c11d843f\") " Jan 07 10:15:12 crc kubenswrapper[5131]: I0107 10:15:12.381649 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2e9d8a56-496e-4486-bff2-cf23c11d843f-buildworkdir\") pod \"2e9d8a56-496e-4486-bff2-cf23c11d843f\" (UID: \"2e9d8a56-496e-4486-bff2-cf23c11d843f\") " Jan 07 10:15:12 crc kubenswrapper[5131]: I0107 10:15:12.381699 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2e9d8a56-496e-4486-bff2-cf23c11d843f-node-pullsecrets\") 
pod \"2e9d8a56-496e-4486-bff2-cf23c11d843f\" (UID: \"2e9d8a56-496e-4486-bff2-cf23c11d843f\") " Jan 07 10:15:12 crc kubenswrapper[5131]: I0107 10:15:12.381752 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e9d8a56-496e-4486-bff2-cf23c11d843f-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "2e9d8a56-496e-4486-bff2-cf23c11d843f" (UID: "2e9d8a56-496e-4486-bff2-cf23c11d843f"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 07 10:15:12 crc kubenswrapper[5131]: I0107 10:15:12.381801 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e9d8a56-496e-4486-bff2-cf23c11d843f-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "2e9d8a56-496e-4486-bff2-cf23c11d843f" (UID: "2e9d8a56-496e-4486-bff2-cf23c11d843f"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 07 10:15:12 crc kubenswrapper[5131]: I0107 10:15:12.381952 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/2e9d8a56-496e-4486-bff2-cf23c11d843f-builder-dockercfg-vc6bg-push\") pod \"2e9d8a56-496e-4486-bff2-cf23c11d843f\" (UID: \"2e9d8a56-496e-4486-bff2-cf23c11d843f\") " Jan 07 10:15:12 crc kubenswrapper[5131]: I0107 10:15:12.382592 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e9d8a56-496e-4486-bff2-cf23c11d843f-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "2e9d8a56-496e-4486-bff2-cf23c11d843f" (UID: "2e9d8a56-496e-4486-bff2-cf23c11d843f"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 10:15:12 crc kubenswrapper[5131]: I0107 10:15:12.382735 5131 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2e9d8a56-496e-4486-bff2-cf23c11d843f-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 07 10:15:12 crc kubenswrapper[5131]: I0107 10:15:12.382765 5131 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2e9d8a56-496e-4486-bff2-cf23c11d843f-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 07 10:15:12 crc kubenswrapper[5131]: I0107 10:15:12.382777 5131 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2e9d8a56-496e-4486-bff2-cf23c11d843f-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 07 10:15:12 crc kubenswrapper[5131]: I0107 10:15:12.386464 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e9d8a56-496e-4486-bff2-cf23c11d843f-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "2e9d8a56-496e-4486-bff2-cf23c11d843f" (UID: "2e9d8a56-496e-4486-bff2-cf23c11d843f"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:15:12 crc kubenswrapper[5131]: I0107 10:15:12.387870 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e9d8a56-496e-4486-bff2-cf23c11d843f-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "2e9d8a56-496e-4486-bff2-cf23c11d843f" (UID: "2e9d8a56-496e-4486-bff2-cf23c11d843f"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:15:12 crc kubenswrapper[5131]: I0107 10:15:12.389029 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e9d8a56-496e-4486-bff2-cf23c11d843f-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "2e9d8a56-496e-4486-bff2-cf23c11d843f" (UID: "2e9d8a56-496e-4486-bff2-cf23c11d843f"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 10:15:12 crc kubenswrapper[5131]: I0107 10:15:12.389162 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e9d8a56-496e-4486-bff2-cf23c11d843f-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "2e9d8a56-496e-4486-bff2-cf23c11d843f" (UID: "2e9d8a56-496e-4486-bff2-cf23c11d843f"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 10:15:12 crc kubenswrapper[5131]: I0107 10:15:12.389285 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e9d8a56-496e-4486-bff2-cf23c11d843f-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "2e9d8a56-496e-4486-bff2-cf23c11d843f" (UID: "2e9d8a56-496e-4486-bff2-cf23c11d843f"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:15:12 crc kubenswrapper[5131]: I0107 10:15:12.392968 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e9d8a56-496e-4486-bff2-cf23c11d843f-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "2e9d8a56-496e-4486-bff2-cf23c11d843f" (UID: "2e9d8a56-496e-4486-bff2-cf23c11d843f"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:15:12 crc kubenswrapper[5131]: I0107 10:15:12.393677 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e9d8a56-496e-4486-bff2-cf23c11d843f-kube-api-access-8bnhr" (OuterVolumeSpecName: "kube-api-access-8bnhr") pod "2e9d8a56-496e-4486-bff2-cf23c11d843f" (UID: "2e9d8a56-496e-4486-bff2-cf23c11d843f"). InnerVolumeSpecName "kube-api-access-8bnhr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 10:15:12 crc kubenswrapper[5131]: I0107 10:15:12.395027 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e9d8a56-496e-4486-bff2-cf23c11d843f-builder-dockercfg-vc6bg-push" (OuterVolumeSpecName: "builder-dockercfg-vc6bg-push") pod "2e9d8a56-496e-4486-bff2-cf23c11d843f" (UID: "2e9d8a56-496e-4486-bff2-cf23c11d843f"). InnerVolumeSpecName "builder-dockercfg-vc6bg-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 10:15:12 crc kubenswrapper[5131]: I0107 10:15:12.396057 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e9d8a56-496e-4486-bff2-cf23c11d843f-builder-dockercfg-vc6bg-pull" (OuterVolumeSpecName: "builder-dockercfg-vc6bg-pull") pod "2e9d8a56-496e-4486-bff2-cf23c11d843f" (UID: "2e9d8a56-496e-4486-bff2-cf23c11d843f"). InnerVolumeSpecName "builder-dockercfg-vc6bg-pull". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 10:15:12 crc kubenswrapper[5131]: I0107 10:15:12.484499 5131 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/2e9d8a56-496e-4486-bff2-cf23c11d843f-builder-dockercfg-vc6bg-push\") on node \"crc\" DevicePath \"\"" Jan 07 10:15:12 crc kubenswrapper[5131]: I0107 10:15:12.484556 5131 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2e9d8a56-496e-4486-bff2-cf23c11d843f-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 07 10:15:12 crc kubenswrapper[5131]: I0107 10:15:12.484574 5131 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2e9d8a56-496e-4486-bff2-cf23c11d843f-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 07 10:15:12 crc kubenswrapper[5131]: I0107 10:15:12.484593 5131 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/2e9d8a56-496e-4486-bff2-cf23c11d843f-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 07 10:15:12 crc kubenswrapper[5131]: I0107 10:15:12.484613 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8bnhr\" (UniqueName: \"kubernetes.io/projected/2e9d8a56-496e-4486-bff2-cf23c11d843f-kube-api-access-8bnhr\") on node \"crc\" DevicePath \"\"" Jan 07 10:15:12 crc kubenswrapper[5131]: I0107 10:15:12.484632 5131 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/2e9d8a56-496e-4486-bff2-cf23c11d843f-builder-dockercfg-vc6bg-pull\") on node \"crc\" DevicePath \"\"" Jan 07 10:15:12 crc kubenswrapper[5131]: I0107 10:15:12.484648 5131 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: 
\"kubernetes.io/configmap/2e9d8a56-496e-4486-bff2-cf23c11d843f-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 07 10:15:12 crc kubenswrapper[5131]: I0107 10:15:12.484665 5131 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2e9d8a56-496e-4486-bff2-cf23c11d843f-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 07 10:15:12 crc kubenswrapper[5131]: I0107 10:15:12.484682 5131 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2e9d8a56-496e-4486-bff2-cf23c11d843f-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 07 10:15:13 crc kubenswrapper[5131]: I0107 10:15:13.031168 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"2e9d8a56-496e-4486-bff2-cf23c11d843f","Type":"ContainerDied","Data":"2a81a3d97cae09fc1c11c2437c1e4c9b49c9b634d93010ab2e2d8e0061fe494c"} Jan 07 10:15:13 crc kubenswrapper[5131]: I0107 10:15:13.031459 5131 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a81a3d97cae09fc1c11c2437c1e4c9b49c9b634d93010ab2e2d8e0061fe494c" Jan 07 10:15:13 crc kubenswrapper[5131]: I0107 10:15:13.031210 5131 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 07 10:15:16 crc kubenswrapper[5131]: I0107 10:15:16.514378 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-1-build"] Jan 07 10:15:16 crc kubenswrapper[5131]: I0107 10:15:16.516243 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2e9d8a56-496e-4486-bff2-cf23c11d843f" containerName="git-clone" Jan 07 10:15:16 crc kubenswrapper[5131]: I0107 10:15:16.516298 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e9d8a56-496e-4486-bff2-cf23c11d843f" containerName="git-clone" Jan 07 10:15:16 crc kubenswrapper[5131]: I0107 10:15:16.516325 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a4227fe0-2ef1-40e6-954c-eb8d3dd11db6" containerName="collect-profiles" Jan 07 10:15:16 crc kubenswrapper[5131]: I0107 10:15:16.516343 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4227fe0-2ef1-40e6-954c-eb8d3dd11db6" containerName="collect-profiles" Jan 07 10:15:16 crc kubenswrapper[5131]: I0107 10:15:16.516391 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2e9d8a56-496e-4486-bff2-cf23c11d843f" containerName="manage-dockerfile" Jan 07 10:15:16 crc kubenswrapper[5131]: I0107 10:15:16.516408 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e9d8a56-496e-4486-bff2-cf23c11d843f" containerName="manage-dockerfile" Jan 07 10:15:16 crc kubenswrapper[5131]: I0107 10:15:16.516433 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2e9d8a56-496e-4486-bff2-cf23c11d843f" containerName="docker-build" Jan 07 10:15:16 crc kubenswrapper[5131]: I0107 10:15:16.516448 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e9d8a56-496e-4486-bff2-cf23c11d843f" containerName="docker-build" Jan 07 10:15:16 crc kubenswrapper[5131]: I0107 10:15:16.516716 5131 memory_manager.go:356] 
"RemoveStaleState removing state" podUID="2e9d8a56-496e-4486-bff2-cf23c11d843f" containerName="docker-build" Jan 07 10:15:16 crc kubenswrapper[5131]: I0107 10:15:16.516753 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="a4227fe0-2ef1-40e6-954c-eb8d3dd11db6" containerName="collect-profiles" Jan 07 10:15:16 crc kubenswrapper[5131]: I0107 10:15:16.534133 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-1-build"] Jan 07 10:15:16 crc kubenswrapper[5131]: I0107 10:15:16.534281 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 07 10:15:16 crc kubenswrapper[5131]: I0107 10:15:16.536608 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-bundle-1-ca\"" Jan 07 10:15:16 crc kubenswrapper[5131]: I0107 10:15:16.536719 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-vc6bg\"" Jan 07 10:15:16 crc kubenswrapper[5131]: I0107 10:15:16.536972 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-bundle-1-sys-config\"" Jan 07 10:15:16 crc kubenswrapper[5131]: I0107 10:15:16.539608 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-bundle-1-global-ca\"" Jan 07 10:15:16 crc kubenswrapper[5131]: I0107 10:15:16.646869 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-container-storage-run\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"02ab07cb-5f5d-47bb-93e7-84dc11d09c03\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 07 10:15:16 crc 
kubenswrapper[5131]: I0107 10:15:16.646949 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-buildworkdir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"02ab07cb-5f5d-47bb-93e7-84dc11d09c03\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 07 10:15:16 crc kubenswrapper[5131]: I0107 10:15:16.646987 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-build-system-configs\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"02ab07cb-5f5d-47bb-93e7-84dc11d09c03\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 07 10:15:16 crc kubenswrapper[5131]: I0107 10:15:16.647018 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"02ab07cb-5f5d-47bb-93e7-84dc11d09c03\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 07 10:15:16 crc kubenswrapper[5131]: I0107 10:15:16.647052 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-build-blob-cache\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"02ab07cb-5f5d-47bb-93e7-84dc11d09c03\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 07 10:15:16 crc kubenswrapper[5131]: I0107 10:15:16.647084 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: 
\"kubernetes.io/host-path/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-buildcachedir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"02ab07cb-5f5d-47bb-93e7-84dc11d09c03\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 07 10:15:16 crc kubenswrapper[5131]: I0107 10:15:16.647119 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-node-pullsecrets\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"02ab07cb-5f5d-47bb-93e7-84dc11d09c03\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 07 10:15:16 crc kubenswrapper[5131]: I0107 10:15:16.647151 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-builder-dockercfg-vc6bg-pull\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"02ab07cb-5f5d-47bb-93e7-84dc11d09c03\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 07 10:15:16 crc kubenswrapper[5131]: I0107 10:15:16.647172 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-container-storage-root\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"02ab07cb-5f5d-47bb-93e7-84dc11d09c03\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 07 10:15:16 crc kubenswrapper[5131]: I0107 10:15:16.647192 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km67v\" (UniqueName: \"kubernetes.io/projected/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-kube-api-access-km67v\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"02ab07cb-5f5d-47bb-93e7-84dc11d09c03\") " 
pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 07 10:15:16 crc kubenswrapper[5131]: I0107 10:15:16.647223 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-builder-dockercfg-vc6bg-push\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"02ab07cb-5f5d-47bb-93e7-84dc11d09c03\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 07 10:15:16 crc kubenswrapper[5131]: I0107 10:15:16.647274 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-build-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"02ab07cb-5f5d-47bb-93e7-84dc11d09c03\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 07 10:15:16 crc kubenswrapper[5131]: I0107 10:15:16.749082 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-km67v\" (UniqueName: \"kubernetes.io/projected/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-kube-api-access-km67v\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"02ab07cb-5f5d-47bb-93e7-84dc11d09c03\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 07 10:15:16 crc kubenswrapper[5131]: I0107 10:15:16.749137 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-builder-dockercfg-vc6bg-push\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"02ab07cb-5f5d-47bb-93e7-84dc11d09c03\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 07 10:15:16 crc kubenswrapper[5131]: I0107 10:15:16.749188 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-build-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"02ab07cb-5f5d-47bb-93e7-84dc11d09c03\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 07 10:15:16 crc kubenswrapper[5131]: I0107 10:15:16.749213 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-container-storage-run\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"02ab07cb-5f5d-47bb-93e7-84dc11d09c03\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 07 10:15:16 crc kubenswrapper[5131]: I0107 10:15:16.749662 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-container-storage-run\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"02ab07cb-5f5d-47bb-93e7-84dc11d09c03\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 07 10:15:16 crc kubenswrapper[5131]: I0107 10:15:16.749712 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-buildworkdir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"02ab07cb-5f5d-47bb-93e7-84dc11d09c03\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 07 10:15:16 crc kubenswrapper[5131]: I0107 10:15:16.749748 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-build-system-configs\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"02ab07cb-5f5d-47bb-93e7-84dc11d09c03\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 07 10:15:16 crc kubenswrapper[5131]: I0107 
10:15:16.749779 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"02ab07cb-5f5d-47bb-93e7-84dc11d09c03\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 07 10:15:16 crc kubenswrapper[5131]: I0107 10:15:16.749988 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-build-blob-cache\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"02ab07cb-5f5d-47bb-93e7-84dc11d09c03\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 07 10:15:16 crc kubenswrapper[5131]: I0107 10:15:16.750049 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-buildcachedir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"02ab07cb-5f5d-47bb-93e7-84dc11d09c03\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 07 10:15:16 crc kubenswrapper[5131]: I0107 10:15:16.750101 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-node-pullsecrets\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"02ab07cb-5f5d-47bb-93e7-84dc11d09c03\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 07 10:15:16 crc kubenswrapper[5131]: I0107 10:15:16.750145 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-builder-dockercfg-vc6bg-pull\") pod \"smart-gateway-operator-bundle-1-build\" (UID: 
\"02ab07cb-5f5d-47bb-93e7-84dc11d09c03\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 07 10:15:16 crc kubenswrapper[5131]: I0107 10:15:16.750184 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-container-storage-root\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"02ab07cb-5f5d-47bb-93e7-84dc11d09c03\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 07 10:15:16 crc kubenswrapper[5131]: I0107 10:15:16.750426 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-buildcachedir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"02ab07cb-5f5d-47bb-93e7-84dc11d09c03\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 07 10:15:16 crc kubenswrapper[5131]: I0107 10:15:16.750473 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-node-pullsecrets\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"02ab07cb-5f5d-47bb-93e7-84dc11d09c03\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 07 10:15:16 crc kubenswrapper[5131]: I0107 10:15:16.750513 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-buildworkdir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"02ab07cb-5f5d-47bb-93e7-84dc11d09c03\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 07 10:15:16 crc kubenswrapper[5131]: I0107 10:15:16.750490 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: 
\"kubernetes.io/empty-dir/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-build-blob-cache\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"02ab07cb-5f5d-47bb-93e7-84dc11d09c03\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 07 10:15:16 crc kubenswrapper[5131]: I0107 10:15:16.750903 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-build-system-configs\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"02ab07cb-5f5d-47bb-93e7-84dc11d09c03\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 07 10:15:16 crc kubenswrapper[5131]: I0107 10:15:16.750911 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-container-storage-root\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"02ab07cb-5f5d-47bb-93e7-84dc11d09c03\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 07 10:15:16 crc kubenswrapper[5131]: I0107 10:15:16.751213 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-build-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"02ab07cb-5f5d-47bb-93e7-84dc11d09c03\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 07 10:15:16 crc kubenswrapper[5131]: I0107 10:15:16.751384 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"02ab07cb-5f5d-47bb-93e7-84dc11d09c03\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 07 10:15:16 crc kubenswrapper[5131]: I0107 10:15:16.755531 5131 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-builder-dockercfg-vc6bg-push\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"02ab07cb-5f5d-47bb-93e7-84dc11d09c03\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 07 10:15:16 crc kubenswrapper[5131]: I0107 10:15:16.761499 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-builder-dockercfg-vc6bg-pull\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"02ab07cb-5f5d-47bb-93e7-84dc11d09c03\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 07 10:15:16 crc kubenswrapper[5131]: I0107 10:15:16.782891 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-km67v\" (UniqueName: \"kubernetes.io/projected/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-kube-api-access-km67v\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"02ab07cb-5f5d-47bb-93e7-84dc11d09c03\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 07 10:15:16 crc kubenswrapper[5131]: I0107 10:15:16.865135 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 07 10:15:17 crc kubenswrapper[5131]: I0107 10:15:17.145464 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-1-build"] Jan 07 10:15:17 crc kubenswrapper[5131]: E0107 10:15:17.626569 5131 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02ab07cb_5f5d_47bb_93e7_84dc11d09c03.slice/crio-f947c8bf28d1cc3ce49bcf82348310d190cfe3032e31f9835b12da358ed040a2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02ab07cb_5f5d_47bb_93e7_84dc11d09c03.slice/crio-conmon-f947c8bf28d1cc3ce49bcf82348310d190cfe3032e31f9835b12da358ed040a2.scope\": RecentStats: unable to find data in memory cache]" Jan 07 10:15:18 crc kubenswrapper[5131]: I0107 10:15:18.082480 5131 generic.go:358] "Generic (PLEG): container finished" podID="02ab07cb-5f5d-47bb-93e7-84dc11d09c03" containerID="f947c8bf28d1cc3ce49bcf82348310d190cfe3032e31f9835b12da358ed040a2" exitCode=0 Jan 07 10:15:18 crc kubenswrapper[5131]: I0107 10:15:18.082584 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-1-build" event={"ID":"02ab07cb-5f5d-47bb-93e7-84dc11d09c03","Type":"ContainerDied","Data":"f947c8bf28d1cc3ce49bcf82348310d190cfe3032e31f9835b12da358ed040a2"} Jan 07 10:15:18 crc kubenswrapper[5131]: I0107 10:15:18.083167 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-1-build" event={"ID":"02ab07cb-5f5d-47bb-93e7-84dc11d09c03","Type":"ContainerStarted","Data":"1f8e1da3818cff8189dadefc9659f878765d70fe5f897ec52f9fbea70a93401a"} Jan 07 10:15:19 crc kubenswrapper[5131]: I0107 10:15:19.094781 5131 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_smart-gateway-operator-bundle-1-build_02ab07cb-5f5d-47bb-93e7-84dc11d09c03/docker-build/0.log" Jan 07 10:15:19 crc kubenswrapper[5131]: I0107 10:15:19.095655 5131 generic.go:358] "Generic (PLEG): container finished" podID="02ab07cb-5f5d-47bb-93e7-84dc11d09c03" containerID="d1a55cdf6d59607629d6c294e4387a398a77555a5528667926d35bc0d7bd6663" exitCode=1 Jan 07 10:15:19 crc kubenswrapper[5131]: I0107 10:15:19.095864 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-1-build" event={"ID":"02ab07cb-5f5d-47bb-93e7-84dc11d09c03","Type":"ContainerDied","Data":"d1a55cdf6d59607629d6c294e4387a398a77555a5528667926d35bc0d7bd6663"} Jan 07 10:15:20 crc kubenswrapper[5131]: I0107 10:15:20.441054 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-bundle-1-build_02ab07cb-5f5d-47bb-93e7-84dc11d09c03/docker-build/0.log" Jan 07 10:15:20 crc kubenswrapper[5131]: I0107 10:15:20.441615 5131 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 07 10:15:20 crc kubenswrapper[5131]: I0107 10:15:20.508597 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-buildworkdir\") pod \"02ab07cb-5f5d-47bb-93e7-84dc11d09c03\" (UID: \"02ab07cb-5f5d-47bb-93e7-84dc11d09c03\") " Jan 07 10:15:20 crc kubenswrapper[5131]: I0107 10:15:20.509412 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-builder-dockercfg-vc6bg-pull\") pod \"02ab07cb-5f5d-47bb-93e7-84dc11d09c03\" (UID: \"02ab07cb-5f5d-47bb-93e7-84dc11d09c03\") " Jan 07 10:15:20 crc kubenswrapper[5131]: I0107 10:15:20.509603 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-km67v\" (UniqueName: \"kubernetes.io/projected/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-kube-api-access-km67v\") pod \"02ab07cb-5f5d-47bb-93e7-84dc11d09c03\" (UID: \"02ab07cb-5f5d-47bb-93e7-84dc11d09c03\") " Jan 07 10:15:20 crc kubenswrapper[5131]: I0107 10:15:20.509826 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-build-ca-bundles\") pod \"02ab07cb-5f5d-47bb-93e7-84dc11d09c03\" (UID: \"02ab07cb-5f5d-47bb-93e7-84dc11d09c03\") " Jan 07 10:15:20 crc kubenswrapper[5131]: I0107 10:15:20.510058 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-container-storage-run\") pod \"02ab07cb-5f5d-47bb-93e7-84dc11d09c03\" (UID: \"02ab07cb-5f5d-47bb-93e7-84dc11d09c03\") " Jan 07 10:15:20 crc kubenswrapper[5131]: I0107 10:15:20.510211 5131 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-build-system-configs\") pod \"02ab07cb-5f5d-47bb-93e7-84dc11d09c03\" (UID: \"02ab07cb-5f5d-47bb-93e7-84dc11d09c03\") " Jan 07 10:15:20 crc kubenswrapper[5131]: I0107 10:15:20.510433 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-container-storage-root\") pod \"02ab07cb-5f5d-47bb-93e7-84dc11d09c03\" (UID: \"02ab07cb-5f5d-47bb-93e7-84dc11d09c03\") " Jan 07 10:15:20 crc kubenswrapper[5131]: I0107 10:15:20.512008 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-build-proxy-ca-bundles\") pod \"02ab07cb-5f5d-47bb-93e7-84dc11d09c03\" (UID: \"02ab07cb-5f5d-47bb-93e7-84dc11d09c03\") " Jan 07 10:15:20 crc kubenswrapper[5131]: I0107 10:15:20.509506 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "02ab07cb-5f5d-47bb-93e7-84dc11d09c03" (UID: "02ab07cb-5f5d-47bb-93e7-84dc11d09c03"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:15:20 crc kubenswrapper[5131]: I0107 10:15:20.511005 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "02ab07cb-5f5d-47bb-93e7-84dc11d09c03" (UID: "02ab07cb-5f5d-47bb-93e7-84dc11d09c03"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 10:15:20 crc kubenswrapper[5131]: I0107 10:15:20.511044 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "02ab07cb-5f5d-47bb-93e7-84dc11d09c03" (UID: "02ab07cb-5f5d-47bb-93e7-84dc11d09c03"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 10:15:20 crc kubenswrapper[5131]: I0107 10:15:20.511209 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "02ab07cb-5f5d-47bb-93e7-84dc11d09c03" (UID: "02ab07cb-5f5d-47bb-93e7-84dc11d09c03"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:15:20 crc kubenswrapper[5131]: I0107 10:15:20.512925 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "02ab07cb-5f5d-47bb-93e7-84dc11d09c03" (UID: "02ab07cb-5f5d-47bb-93e7-84dc11d09c03"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 10:15:20 crc kubenswrapper[5131]: I0107 10:15:20.513245 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "02ab07cb-5f5d-47bb-93e7-84dc11d09c03" (UID: "02ab07cb-5f5d-47bb-93e7-84dc11d09c03"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:15:20 crc kubenswrapper[5131]: I0107 10:15:20.513346 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-build-blob-cache\") pod \"02ab07cb-5f5d-47bb-93e7-84dc11d09c03\" (UID: \"02ab07cb-5f5d-47bb-93e7-84dc11d09c03\") " Jan 07 10:15:20 crc kubenswrapper[5131]: I0107 10:15:20.513487 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-node-pullsecrets\") pod \"02ab07cb-5f5d-47bb-93e7-84dc11d09c03\" (UID: \"02ab07cb-5f5d-47bb-93e7-84dc11d09c03\") " Jan 07 10:15:20 crc kubenswrapper[5131]: I0107 10:15:20.513564 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "02ab07cb-5f5d-47bb-93e7-84dc11d09c03" (UID: "02ab07cb-5f5d-47bb-93e7-84dc11d09c03"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 07 10:15:20 crc kubenswrapper[5131]: I0107 10:15:20.513628 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-builder-dockercfg-vc6bg-push\") pod \"02ab07cb-5f5d-47bb-93e7-84dc11d09c03\" (UID: \"02ab07cb-5f5d-47bb-93e7-84dc11d09c03\") " Jan 07 10:15:20 crc kubenswrapper[5131]: I0107 10:15:20.514074 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "02ab07cb-5f5d-47bb-93e7-84dc11d09c03" (UID: "02ab07cb-5f5d-47bb-93e7-84dc11d09c03"). 
InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:15:20 crc kubenswrapper[5131]: I0107 10:15:20.514258 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-buildcachedir\") pod \"02ab07cb-5f5d-47bb-93e7-84dc11d09c03\" (UID: \"02ab07cb-5f5d-47bb-93e7-84dc11d09c03\") " Jan 07 10:15:20 crc kubenswrapper[5131]: I0107 10:15:20.514326 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "02ab07cb-5f5d-47bb-93e7-84dc11d09c03" (UID: "02ab07cb-5f5d-47bb-93e7-84dc11d09c03"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 07 10:15:20 crc kubenswrapper[5131]: I0107 10:15:20.515236 5131 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 07 10:15:20 crc kubenswrapper[5131]: I0107 10:15:20.515272 5131 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 07 10:15:20 crc kubenswrapper[5131]: I0107 10:15:20.515292 5131 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 07 10:15:20 crc kubenswrapper[5131]: I0107 10:15:20.515312 5131 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-container-storage-root\") on node \"crc\" 
DevicePath \"\"" Jan 07 10:15:20 crc kubenswrapper[5131]: I0107 10:15:20.515330 5131 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 07 10:15:20 crc kubenswrapper[5131]: I0107 10:15:20.515346 5131 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 07 10:15:20 crc kubenswrapper[5131]: I0107 10:15:20.515364 5131 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 07 10:15:20 crc kubenswrapper[5131]: I0107 10:15:20.515381 5131 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 07 10:15:20 crc kubenswrapper[5131]: I0107 10:15:20.515398 5131 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 07 10:15:20 crc kubenswrapper[5131]: I0107 10:15:20.516378 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-kube-api-access-km67v" (OuterVolumeSpecName: "kube-api-access-km67v") pod "02ab07cb-5f5d-47bb-93e7-84dc11d09c03" (UID: "02ab07cb-5f5d-47bb-93e7-84dc11d09c03"). InnerVolumeSpecName "kube-api-access-km67v". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 10:15:20 crc kubenswrapper[5131]: I0107 10:15:20.516505 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-builder-dockercfg-vc6bg-pull" (OuterVolumeSpecName: "builder-dockercfg-vc6bg-pull") pod "02ab07cb-5f5d-47bb-93e7-84dc11d09c03" (UID: "02ab07cb-5f5d-47bb-93e7-84dc11d09c03"). InnerVolumeSpecName "builder-dockercfg-vc6bg-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 10:15:20 crc kubenswrapper[5131]: I0107 10:15:20.517230 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-builder-dockercfg-vc6bg-push" (OuterVolumeSpecName: "builder-dockercfg-vc6bg-push") pod "02ab07cb-5f5d-47bb-93e7-84dc11d09c03" (UID: "02ab07cb-5f5d-47bb-93e7-84dc11d09c03"). InnerVolumeSpecName "builder-dockercfg-vc6bg-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 10:15:20 crc kubenswrapper[5131]: I0107 10:15:20.617154 5131 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-builder-dockercfg-vc6bg-push\") on node \"crc\" DevicePath \"\"" Jan 07 10:15:20 crc kubenswrapper[5131]: I0107 10:15:20.617236 5131 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-builder-dockercfg-vc6bg-pull\") on node \"crc\" DevicePath \"\"" Jan 07 10:15:20 crc kubenswrapper[5131]: I0107 10:15:20.617263 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-km67v\" (UniqueName: \"kubernetes.io/projected/02ab07cb-5f5d-47bb-93e7-84dc11d09c03-kube-api-access-km67v\") on node \"crc\" DevicePath \"\"" Jan 07 10:15:21 crc kubenswrapper[5131]: I0107 10:15:21.119272 5131 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_smart-gateway-operator-bundle-1-build_02ab07cb-5f5d-47bb-93e7-84dc11d09c03/docker-build/0.log" Jan 07 10:15:21 crc kubenswrapper[5131]: I0107 10:15:21.120302 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 07 10:15:21 crc kubenswrapper[5131]: I0107 10:15:21.120317 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-1-build" event={"ID":"02ab07cb-5f5d-47bb-93e7-84dc11d09c03","Type":"ContainerDied","Data":"1f8e1da3818cff8189dadefc9659f878765d70fe5f897ec52f9fbea70a93401a"} Jan 07 10:15:21 crc kubenswrapper[5131]: I0107 10:15:21.120403 5131 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f8e1da3818cff8189dadefc9659f878765d70fe5f897ec52f9fbea70a93401a" Jan 07 10:15:27 crc kubenswrapper[5131]: I0107 10:15:27.086742 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-1-build"] Jan 07 10:15:27 crc kubenswrapper[5131]: I0107 10:15:27.097293 5131 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-1-build"] Jan 07 10:15:28 crc kubenswrapper[5131]: I0107 10:15:28.194649 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02ab07cb-5f5d-47bb-93e7-84dc11d09c03" path="/var/lib/kubelet/pods/02ab07cb-5f5d-47bb-93e7-84dc11d09c03/volumes" Jan 07 10:15:28 crc kubenswrapper[5131]: I0107 10:15:28.654659 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-2-build"] Jan 07 10:15:28 crc kubenswrapper[5131]: I0107 10:15:28.655931 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="02ab07cb-5f5d-47bb-93e7-84dc11d09c03" containerName="manage-dockerfile" Jan 07 10:15:28 crc kubenswrapper[5131]: I0107 10:15:28.656061 5131 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="02ab07cb-5f5d-47bb-93e7-84dc11d09c03" containerName="manage-dockerfile" Jan 07 10:15:28 crc kubenswrapper[5131]: I0107 10:15:28.656150 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="02ab07cb-5f5d-47bb-93e7-84dc11d09c03" containerName="docker-build" Jan 07 10:15:28 crc kubenswrapper[5131]: I0107 10:15:28.656222 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="02ab07cb-5f5d-47bb-93e7-84dc11d09c03" containerName="docker-build" Jan 07 10:15:28 crc kubenswrapper[5131]: I0107 10:15:28.656440 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="02ab07cb-5f5d-47bb-93e7-84dc11d09c03" containerName="docker-build" Jan 07 10:15:28 crc kubenswrapper[5131]: I0107 10:15:28.661043 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 07 10:15:28 crc kubenswrapper[5131]: I0107 10:15:28.666063 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-bundle-2-sys-config\"" Jan 07 10:15:28 crc kubenswrapper[5131]: I0107 10:15:28.666075 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-bundle-2-global-ca\"" Jan 07 10:15:28 crc kubenswrapper[5131]: I0107 10:15:28.667081 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-bundle-2-ca\"" Jan 07 10:15:28 crc kubenswrapper[5131]: I0107 10:15:28.671315 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-vc6bg\"" Jan 07 10:15:28 crc kubenswrapper[5131]: I0107 10:15:28.686476 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-2-build"] Jan 07 10:15:28 crc kubenswrapper[5131]: I0107 10:15:28.743352 5131 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-builder-dockercfg-vc6bg-push\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bd37b9c7-fbaf-4418-b7cd-fea504b855a1\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 07 10:15:28 crc kubenswrapper[5131]: I0107 10:15:28.743398 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-build-system-configs\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bd37b9c7-fbaf-4418-b7cd-fea504b855a1\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 07 10:15:28 crc kubenswrapper[5131]: I0107 10:15:28.743426 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-build-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bd37b9c7-fbaf-4418-b7cd-fea504b855a1\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 07 10:15:28 crc kubenswrapper[5131]: I0107 10:15:28.743444 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-buildworkdir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bd37b9c7-fbaf-4418-b7cd-fea504b855a1\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 07 10:15:28 crc kubenswrapper[5131]: I0107 10:15:28.743460 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-build-blob-cache\") pod 
\"smart-gateway-operator-bundle-2-build\" (UID: \"bd37b9c7-fbaf-4418-b7cd-fea504b855a1\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 07 10:15:28 crc kubenswrapper[5131]: I0107 10:15:28.743476 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-builder-dockercfg-vc6bg-pull\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bd37b9c7-fbaf-4418-b7cd-fea504b855a1\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 07 10:15:28 crc kubenswrapper[5131]: I0107 10:15:28.743494 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-container-storage-run\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bd37b9c7-fbaf-4418-b7cd-fea504b855a1\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 07 10:15:28 crc kubenswrapper[5131]: I0107 10:15:28.743521 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bd37b9c7-fbaf-4418-b7cd-fea504b855a1\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 07 10:15:28 crc kubenswrapper[5131]: I0107 10:15:28.743538 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-container-storage-root\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bd37b9c7-fbaf-4418-b7cd-fea504b855a1\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 07 10:15:28 crc 
kubenswrapper[5131]: I0107 10:15:28.743555 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-node-pullsecrets\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bd37b9c7-fbaf-4418-b7cd-fea504b855a1\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 07 10:15:28 crc kubenswrapper[5131]: I0107 10:15:28.743598 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-buildcachedir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bd37b9c7-fbaf-4418-b7cd-fea504b855a1\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 07 10:15:28 crc kubenswrapper[5131]: I0107 10:15:28.743626 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff8bf\" (UniqueName: \"kubernetes.io/projected/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-kube-api-access-ff8bf\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bd37b9c7-fbaf-4418-b7cd-fea504b855a1\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 07 10:15:28 crc kubenswrapper[5131]: I0107 10:15:28.845716 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-builder-dockercfg-vc6bg-push\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bd37b9c7-fbaf-4418-b7cd-fea504b855a1\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 07 10:15:28 crc kubenswrapper[5131]: I0107 10:15:28.845826 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: 
\"kubernetes.io/configmap/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-build-system-configs\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bd37b9c7-fbaf-4418-b7cd-fea504b855a1\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 07 10:15:28 crc kubenswrapper[5131]: I0107 10:15:28.845934 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-build-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bd37b9c7-fbaf-4418-b7cd-fea504b855a1\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 07 10:15:28 crc kubenswrapper[5131]: I0107 10:15:28.845997 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-buildworkdir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bd37b9c7-fbaf-4418-b7cd-fea504b855a1\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 07 10:15:28 crc kubenswrapper[5131]: I0107 10:15:28.846048 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-build-blob-cache\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bd37b9c7-fbaf-4418-b7cd-fea504b855a1\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 07 10:15:28 crc kubenswrapper[5131]: I0107 10:15:28.846098 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-builder-dockercfg-vc6bg-pull\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bd37b9c7-fbaf-4418-b7cd-fea504b855a1\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 07 10:15:28 crc kubenswrapper[5131]: I0107 10:15:28.846154 
5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-container-storage-run\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bd37b9c7-fbaf-4418-b7cd-fea504b855a1\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 07 10:15:28 crc kubenswrapper[5131]: I0107 10:15:28.846242 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bd37b9c7-fbaf-4418-b7cd-fea504b855a1\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 07 10:15:28 crc kubenswrapper[5131]: I0107 10:15:28.847114 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-buildworkdir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bd37b9c7-fbaf-4418-b7cd-fea504b855a1\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 07 10:15:28 crc kubenswrapper[5131]: I0107 10:15:28.847471 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-container-storage-root\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bd37b9c7-fbaf-4418-b7cd-fea504b855a1\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 07 10:15:28 crc kubenswrapper[5131]: I0107 10:15:28.847705 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-container-storage-run\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bd37b9c7-fbaf-4418-b7cd-fea504b855a1\") " 
pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 07 10:15:28 crc kubenswrapper[5131]: I0107 10:15:28.847722 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bd37b9c7-fbaf-4418-b7cd-fea504b855a1\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 07 10:15:28 crc kubenswrapper[5131]: I0107 10:15:28.847820 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-container-storage-root\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bd37b9c7-fbaf-4418-b7cd-fea504b855a1\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 07 10:15:28 crc kubenswrapper[5131]: I0107 10:15:28.847963 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-node-pullsecrets\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bd37b9c7-fbaf-4418-b7cd-fea504b855a1\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 07 10:15:28 crc kubenswrapper[5131]: I0107 10:15:28.848062 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-build-blob-cache\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bd37b9c7-fbaf-4418-b7cd-fea504b855a1\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 07 10:15:28 crc kubenswrapper[5131]: I0107 10:15:28.848069 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-build-ca-bundles\") 
pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bd37b9c7-fbaf-4418-b7cd-fea504b855a1\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 07 10:15:28 crc kubenswrapper[5131]: I0107 10:15:28.848280 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-build-system-configs\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bd37b9c7-fbaf-4418-b7cd-fea504b855a1\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 07 10:15:28 crc kubenswrapper[5131]: I0107 10:15:28.848287 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-node-pullsecrets\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bd37b9c7-fbaf-4418-b7cd-fea504b855a1\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 07 10:15:28 crc kubenswrapper[5131]: I0107 10:15:28.848643 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-buildcachedir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bd37b9c7-fbaf-4418-b7cd-fea504b855a1\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 07 10:15:28 crc kubenswrapper[5131]: I0107 10:15:28.848729 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ff8bf\" (UniqueName: \"kubernetes.io/projected/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-kube-api-access-ff8bf\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bd37b9c7-fbaf-4418-b7cd-fea504b855a1\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 07 10:15:28 crc kubenswrapper[5131]: I0107 10:15:28.848902 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: 
\"kubernetes.io/host-path/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-buildcachedir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bd37b9c7-fbaf-4418-b7cd-fea504b855a1\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 07 10:15:28 crc kubenswrapper[5131]: I0107 10:15:28.856217 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-builder-dockercfg-vc6bg-push\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bd37b9c7-fbaf-4418-b7cd-fea504b855a1\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 07 10:15:28 crc kubenswrapper[5131]: I0107 10:15:28.856726 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-builder-dockercfg-vc6bg-pull\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bd37b9c7-fbaf-4418-b7cd-fea504b855a1\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 07 10:15:28 crc kubenswrapper[5131]: I0107 10:15:28.875358 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ff8bf\" (UniqueName: \"kubernetes.io/projected/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-kube-api-access-ff8bf\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bd37b9c7-fbaf-4418-b7cd-fea504b855a1\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 07 10:15:28 crc kubenswrapper[5131]: I0107 10:15:28.979742 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 07 10:15:29 crc kubenswrapper[5131]: I0107 10:15:29.198001 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-2-build"] Jan 07 10:15:30 crc kubenswrapper[5131]: I0107 10:15:30.211236 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"bd37b9c7-fbaf-4418-b7cd-fea504b855a1","Type":"ContainerStarted","Data":"7baaedfb561d18fd55552973aa9719403fa6d8b9bb7cafb457df3fb5d257c146"} Jan 07 10:15:30 crc kubenswrapper[5131]: I0107 10:15:30.211570 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"bd37b9c7-fbaf-4418-b7cd-fea504b855a1","Type":"ContainerStarted","Data":"ac584c60ada6a852d16277ddfeb14bda1b9c0bbf640a68ad65b0607fd0f3245c"} Jan 07 10:15:31 crc kubenswrapper[5131]: I0107 10:15:31.223093 5131 generic.go:358] "Generic (PLEG): container finished" podID="bd37b9c7-fbaf-4418-b7cd-fea504b855a1" containerID="7baaedfb561d18fd55552973aa9719403fa6d8b9bb7cafb457df3fb5d257c146" exitCode=0 Jan 07 10:15:31 crc kubenswrapper[5131]: I0107 10:15:31.223161 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"bd37b9c7-fbaf-4418-b7cd-fea504b855a1","Type":"ContainerDied","Data":"7baaedfb561d18fd55552973aa9719403fa6d8b9bb7cafb457df3fb5d257c146"} Jan 07 10:15:32 crc kubenswrapper[5131]: I0107 10:15:32.246175 5131 generic.go:358] "Generic (PLEG): container finished" podID="bd37b9c7-fbaf-4418-b7cd-fea504b855a1" containerID="520248dc560c68e3aa222e1e8c52098d3de1148ab98717ebf27ed36ff5facf61" exitCode=0 Jan 07 10:15:32 crc kubenswrapper[5131]: I0107 10:15:32.246625 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" 
event={"ID":"bd37b9c7-fbaf-4418-b7cd-fea504b855a1","Type":"ContainerDied","Data":"520248dc560c68e3aa222e1e8c52098d3de1148ab98717ebf27ed36ff5facf61"} Jan 07 10:15:32 crc kubenswrapper[5131]: I0107 10:15:32.277151 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-bundle-2-build_bd37b9c7-fbaf-4418-b7cd-fea504b855a1/manage-dockerfile/0.log" Jan 07 10:15:33 crc kubenswrapper[5131]: I0107 10:15:33.261463 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"bd37b9c7-fbaf-4418-b7cd-fea504b855a1","Type":"ContainerStarted","Data":"12181e20a5b941543ec1df8ea71101bff6682109b3b49a63d64be08cbf22fc1e"} Jan 07 10:15:33 crc kubenswrapper[5131]: I0107 10:15:33.296356 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-bundle-2-build" podStartSLOduration=5.296326597 podStartE2EDuration="5.296326597s" podCreationTimestamp="2026-01-07 10:15:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 10:15:33.289612281 +0000 UTC m=+1561.455913855" watchObservedRunningTime="2026-01-07 10:15:33.296326597 +0000 UTC m=+1561.462628191" Jan 07 10:15:37 crc kubenswrapper[5131]: I0107 10:15:37.291821 5131 generic.go:358] "Generic (PLEG): container finished" podID="bd37b9c7-fbaf-4418-b7cd-fea504b855a1" containerID="12181e20a5b941543ec1df8ea71101bff6682109b3b49a63d64be08cbf22fc1e" exitCode=0 Jan 07 10:15:37 crc kubenswrapper[5131]: I0107 10:15:37.291909 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"bd37b9c7-fbaf-4418-b7cd-fea504b855a1","Type":"ContainerDied","Data":"12181e20a5b941543ec1df8ea71101bff6682109b3b49a63d64be08cbf22fc1e"} Jan 07 10:15:38 crc kubenswrapper[5131]: I0107 10:15:38.665454 5131 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 07 10:15:38 crc kubenswrapper[5131]: I0107 10:15:38.813422 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-build-ca-bundles\") pod \"bd37b9c7-fbaf-4418-b7cd-fea504b855a1\" (UID: \"bd37b9c7-fbaf-4418-b7cd-fea504b855a1\") " Jan 07 10:15:38 crc kubenswrapper[5131]: I0107 10:15:38.813475 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-builder-dockercfg-vc6bg-push\") pod \"bd37b9c7-fbaf-4418-b7cd-fea504b855a1\" (UID: \"bd37b9c7-fbaf-4418-b7cd-fea504b855a1\") " Jan 07 10:15:38 crc kubenswrapper[5131]: I0107 10:15:38.813570 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-buildworkdir\") pod \"bd37b9c7-fbaf-4418-b7cd-fea504b855a1\" (UID: \"bd37b9c7-fbaf-4418-b7cd-fea504b855a1\") " Jan 07 10:15:38 crc kubenswrapper[5131]: I0107 10:15:38.813604 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-builder-dockercfg-vc6bg-pull\") pod \"bd37b9c7-fbaf-4418-b7cd-fea504b855a1\" (UID: \"bd37b9c7-fbaf-4418-b7cd-fea504b855a1\") " Jan 07 10:15:38 crc kubenswrapper[5131]: I0107 10:15:38.813778 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-container-storage-root\") pod \"bd37b9c7-fbaf-4418-b7cd-fea504b855a1\" (UID: \"bd37b9c7-fbaf-4418-b7cd-fea504b855a1\") " Jan 07 10:15:38 crc kubenswrapper[5131]: I0107 
10:15:38.813913 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-container-storage-run\") pod \"bd37b9c7-fbaf-4418-b7cd-fea504b855a1\" (UID: \"bd37b9c7-fbaf-4418-b7cd-fea504b855a1\") " Jan 07 10:15:38 crc kubenswrapper[5131]: I0107 10:15:38.813951 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-build-proxy-ca-bundles\") pod \"bd37b9c7-fbaf-4418-b7cd-fea504b855a1\" (UID: \"bd37b9c7-fbaf-4418-b7cd-fea504b855a1\") " Jan 07 10:15:38 crc kubenswrapper[5131]: I0107 10:15:38.814044 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-build-system-configs\") pod \"bd37b9c7-fbaf-4418-b7cd-fea504b855a1\" (UID: \"bd37b9c7-fbaf-4418-b7cd-fea504b855a1\") " Jan 07 10:15:38 crc kubenswrapper[5131]: I0107 10:15:38.814106 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-build-blob-cache\") pod \"bd37b9c7-fbaf-4418-b7cd-fea504b855a1\" (UID: \"bd37b9c7-fbaf-4418-b7cd-fea504b855a1\") " Jan 07 10:15:38 crc kubenswrapper[5131]: I0107 10:15:38.814158 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ff8bf\" (UniqueName: \"kubernetes.io/projected/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-kube-api-access-ff8bf\") pod \"bd37b9c7-fbaf-4418-b7cd-fea504b855a1\" (UID: \"bd37b9c7-fbaf-4418-b7cd-fea504b855a1\") " Jan 07 10:15:38 crc kubenswrapper[5131]: I0107 10:15:38.814193 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: 
\"kubernetes.io/host-path/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-buildcachedir\") pod \"bd37b9c7-fbaf-4418-b7cd-fea504b855a1\" (UID: \"bd37b9c7-fbaf-4418-b7cd-fea504b855a1\") " Jan 07 10:15:38 crc kubenswrapper[5131]: I0107 10:15:38.814238 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-node-pullsecrets\") pod \"bd37b9c7-fbaf-4418-b7cd-fea504b855a1\" (UID: \"bd37b9c7-fbaf-4418-b7cd-fea504b855a1\") " Jan 07 10:15:38 crc kubenswrapper[5131]: I0107 10:15:38.814514 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "bd37b9c7-fbaf-4418-b7cd-fea504b855a1" (UID: "bd37b9c7-fbaf-4418-b7cd-fea504b855a1"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 10:15:38 crc kubenswrapper[5131]: I0107 10:15:38.814613 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "bd37b9c7-fbaf-4418-b7cd-fea504b855a1" (UID: "bd37b9c7-fbaf-4418-b7cd-fea504b855a1"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 10:15:38 crc kubenswrapper[5131]: I0107 10:15:38.814722 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "bd37b9c7-fbaf-4418-b7cd-fea504b855a1" (UID: "bd37b9c7-fbaf-4418-b7cd-fea504b855a1"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 07 10:15:38 crc kubenswrapper[5131]: I0107 10:15:38.814786 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "bd37b9c7-fbaf-4418-b7cd-fea504b855a1" (UID: "bd37b9c7-fbaf-4418-b7cd-fea504b855a1"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 07 10:15:38 crc kubenswrapper[5131]: I0107 10:15:38.815480 5131 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 07 10:15:38 crc kubenswrapper[5131]: I0107 10:15:38.815517 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "bd37b9c7-fbaf-4418-b7cd-fea504b855a1" (UID: "bd37b9c7-fbaf-4418-b7cd-fea504b855a1"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:15:38 crc kubenswrapper[5131]: I0107 10:15:38.815518 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "bd37b9c7-fbaf-4418-b7cd-fea504b855a1" (UID: "bd37b9c7-fbaf-4418-b7cd-fea504b855a1"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 10:15:38 crc kubenswrapper[5131]: I0107 10:15:38.815537 5131 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 07 10:15:38 crc kubenswrapper[5131]: I0107 10:15:38.815603 5131 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 07 10:15:38 crc kubenswrapper[5131]: I0107 10:15:38.815629 5131 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 07 10:15:38 crc kubenswrapper[5131]: I0107 10:15:38.816220 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "bd37b9c7-fbaf-4418-b7cd-fea504b855a1" (UID: "bd37b9c7-fbaf-4418-b7cd-fea504b855a1"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:15:38 crc kubenswrapper[5131]: I0107 10:15:38.816368 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "bd37b9c7-fbaf-4418-b7cd-fea504b855a1" (UID: "bd37b9c7-fbaf-4418-b7cd-fea504b855a1"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:15:38 crc kubenswrapper[5131]: I0107 10:15:38.819711 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "bd37b9c7-fbaf-4418-b7cd-fea504b855a1" (UID: "bd37b9c7-fbaf-4418-b7cd-fea504b855a1"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:15:38 crc kubenswrapper[5131]: I0107 10:15:38.821175 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-kube-api-access-ff8bf" (OuterVolumeSpecName: "kube-api-access-ff8bf") pod "bd37b9c7-fbaf-4418-b7cd-fea504b855a1" (UID: "bd37b9c7-fbaf-4418-b7cd-fea504b855a1"). InnerVolumeSpecName "kube-api-access-ff8bf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 10:15:38 crc kubenswrapper[5131]: I0107 10:15:38.821247 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-builder-dockercfg-vc6bg-pull" (OuterVolumeSpecName: "builder-dockercfg-vc6bg-pull") pod "bd37b9c7-fbaf-4418-b7cd-fea504b855a1" (UID: "bd37b9c7-fbaf-4418-b7cd-fea504b855a1"). InnerVolumeSpecName "builder-dockercfg-vc6bg-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 10:15:38 crc kubenswrapper[5131]: I0107 10:15:38.821459 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-builder-dockercfg-vc6bg-push" (OuterVolumeSpecName: "builder-dockercfg-vc6bg-push") pod "bd37b9c7-fbaf-4418-b7cd-fea504b855a1" (UID: "bd37b9c7-fbaf-4418-b7cd-fea504b855a1"). InnerVolumeSpecName "builder-dockercfg-vc6bg-push". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 10:15:38 crc kubenswrapper[5131]: I0107 10:15:38.917637 5131 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 07 10:15:38 crc kubenswrapper[5131]: I0107 10:15:38.917718 5131 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 07 10:15:38 crc kubenswrapper[5131]: I0107 10:15:38.917747 5131 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 07 10:15:38 crc kubenswrapper[5131]: I0107 10:15:38.917771 5131 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 07 10:15:38 crc kubenswrapper[5131]: I0107 10:15:38.917794 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ff8bf\" (UniqueName: \"kubernetes.io/projected/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-kube-api-access-ff8bf\") on node \"crc\" DevicePath \"\"" Jan 07 10:15:38 crc kubenswrapper[5131]: I0107 10:15:38.917826 5131 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-builder-dockercfg-vc6bg-push\") on node \"crc\" DevicePath \"\"" Jan 07 10:15:38 crc kubenswrapper[5131]: I0107 10:15:38.917883 5131 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-buildworkdir\") on node \"crc\" 
DevicePath \"\"" Jan 07 10:15:38 crc kubenswrapper[5131]: I0107 10:15:38.917908 5131 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/bd37b9c7-fbaf-4418-b7cd-fea504b855a1-builder-dockercfg-vc6bg-pull\") on node \"crc\" DevicePath \"\"" Jan 07 10:15:39 crc kubenswrapper[5131]: I0107 10:15:39.314207 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"bd37b9c7-fbaf-4418-b7cd-fea504b855a1","Type":"ContainerDied","Data":"ac584c60ada6a852d16277ddfeb14bda1b9c0bbf640a68ad65b0607fd0f3245c"} Jan 07 10:15:39 crc kubenswrapper[5131]: I0107 10:15:39.314256 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 07 10:15:39 crc kubenswrapper[5131]: I0107 10:15:39.314270 5131 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac584c60ada6a852d16277ddfeb14bda1b9c0bbf640a68ad65b0607fd0f3245c" Jan 07 10:15:51 crc kubenswrapper[5131]: I0107 10:15:51.161009 5131 patch_prober.go:28] interesting pod/machine-config-daemon-dvdrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 07 10:15:51 crc kubenswrapper[5131]: I0107 10:15:51.161498 5131 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 07 10:15:57 crc kubenswrapper[5131]: I0107 10:15:57.280154 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Jan 07 
10:15:57 crc kubenswrapper[5131]: I0107 10:15:57.281804 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bd37b9c7-fbaf-4418-b7cd-fea504b855a1" containerName="git-clone" Jan 07 10:15:57 crc kubenswrapper[5131]: I0107 10:15:57.281826 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd37b9c7-fbaf-4418-b7cd-fea504b855a1" containerName="git-clone" Jan 07 10:15:57 crc kubenswrapper[5131]: I0107 10:15:57.281913 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bd37b9c7-fbaf-4418-b7cd-fea504b855a1" containerName="manage-dockerfile" Jan 07 10:15:57 crc kubenswrapper[5131]: I0107 10:15:57.281926 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd37b9c7-fbaf-4418-b7cd-fea504b855a1" containerName="manage-dockerfile" Jan 07 10:15:57 crc kubenswrapper[5131]: I0107 10:15:57.281947 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bd37b9c7-fbaf-4418-b7cd-fea504b855a1" containerName="docker-build" Jan 07 10:15:57 crc kubenswrapper[5131]: I0107 10:15:57.281961 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd37b9c7-fbaf-4418-b7cd-fea504b855a1" containerName="docker-build" Jan 07 10:15:57 crc kubenswrapper[5131]: I0107 10:15:57.282171 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="bd37b9c7-fbaf-4418-b7cd-fea504b855a1" containerName="docker-build" Jan 07 10:15:57 crc kubenswrapper[5131]: I0107 10:15:57.287737 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 07 10:15:57 crc kubenswrapper[5131]: I0107 10:15:57.290284 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-1-sys-config\"" Jan 07 10:15:57 crc kubenswrapper[5131]: I0107 10:15:57.290620 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-1-ca\"" Jan 07 10:15:57 crc kubenswrapper[5131]: I0107 10:15:57.290871 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-dockercfg\"" Jan 07 10:15:57 crc kubenswrapper[5131]: I0107 10:15:57.291138 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-1-global-ca\"" Jan 07 10:15:57 crc kubenswrapper[5131]: I0107 10:15:57.291419 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-vc6bg\"" Jan 07 10:15:57 crc kubenswrapper[5131]: I0107 10:15:57.320288 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Jan 07 10:15:57 crc kubenswrapper[5131]: I0107 10:15:57.350453 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/cfc74efa-7466-492d-9c61-872725d8696b-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cfc74efa-7466-492d-9c61-872725d8696b\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 07 10:15:57 crc kubenswrapper[5131]: I0107 10:15:57.350552 5131 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cfc74efa-7466-492d-9c61-872725d8696b-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cfc74efa-7466-492d-9c61-872725d8696b\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 07 10:15:57 crc kubenswrapper[5131]: I0107 10:15:57.350723 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cfc74efa-7466-492d-9c61-872725d8696b-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cfc74efa-7466-492d-9c61-872725d8696b\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 07 10:15:57 crc kubenswrapper[5131]: I0107 10:15:57.350773 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cfc74efa-7466-492d-9c61-872725d8696b-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cfc74efa-7466-492d-9c61-872725d8696b\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 07 10:15:57 crc kubenswrapper[5131]: I0107 10:15:57.350884 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cfc74efa-7466-492d-9c61-872725d8696b-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cfc74efa-7466-492d-9c61-872725d8696b\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 07 10:15:57 crc kubenswrapper[5131]: I0107 10:15:57.351003 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: 
\"kubernetes.io/secret/cfc74efa-7466-492d-9c61-872725d8696b-builder-dockercfg-vc6bg-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cfc74efa-7466-492d-9c61-872725d8696b\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 07 10:15:57 crc kubenswrapper[5131]: I0107 10:15:57.351134 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cfc74efa-7466-492d-9c61-872725d8696b-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cfc74efa-7466-492d-9c61-872725d8696b\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 07 10:15:57 crc kubenswrapper[5131]: I0107 10:15:57.351216 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/cfc74efa-7466-492d-9c61-872725d8696b-builder-dockercfg-vc6bg-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cfc74efa-7466-492d-9c61-872725d8696b\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 07 10:15:57 crc kubenswrapper[5131]: I0107 10:15:57.351323 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cfc74efa-7466-492d-9c61-872725d8696b-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cfc74efa-7466-492d-9c61-872725d8696b\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 07 10:15:57 crc kubenswrapper[5131]: I0107 10:15:57.351416 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cfc74efa-7466-492d-9c61-872725d8696b-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: 
\"cfc74efa-7466-492d-9c61-872725d8696b\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 07 10:15:57 crc kubenswrapper[5131]: I0107 10:15:57.351521 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7ftt\" (UniqueName: \"kubernetes.io/projected/cfc74efa-7466-492d-9c61-872725d8696b-kube-api-access-z7ftt\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cfc74efa-7466-492d-9c61-872725d8696b\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 07 10:15:57 crc kubenswrapper[5131]: I0107 10:15:57.351646 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cfc74efa-7466-492d-9c61-872725d8696b-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cfc74efa-7466-492d-9c61-872725d8696b\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 07 10:15:57 crc kubenswrapper[5131]: I0107 10:15:57.351746 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cfc74efa-7466-492d-9c61-872725d8696b-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cfc74efa-7466-492d-9c61-872725d8696b\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 07 10:15:57 crc kubenswrapper[5131]: I0107 10:15:57.453341 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cfc74efa-7466-492d-9c61-872725d8696b-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cfc74efa-7466-492d-9c61-872725d8696b\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 07 10:15:57 crc kubenswrapper[5131]: I0107 10:15:57.453494 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/cfc74efa-7466-492d-9c61-872725d8696b-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cfc74efa-7466-492d-9c61-872725d8696b\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 07 10:15:57 crc kubenswrapper[5131]: I0107 10:15:57.453604 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cfc74efa-7466-492d-9c61-872725d8696b-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cfc74efa-7466-492d-9c61-872725d8696b\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 07 10:15:57 crc kubenswrapper[5131]: I0107 10:15:57.454184 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cfc74efa-7466-492d-9c61-872725d8696b-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cfc74efa-7466-492d-9c61-872725d8696b\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 07 10:15:57 crc kubenswrapper[5131]: I0107 10:15:57.454301 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cfc74efa-7466-492d-9c61-872725d8696b-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cfc74efa-7466-492d-9c61-872725d8696b\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 07 10:15:57 crc kubenswrapper[5131]: I0107 10:15:57.455317 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cfc74efa-7466-492d-9c61-872725d8696b-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cfc74efa-7466-492d-9c61-872725d8696b\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 07 10:15:57 crc kubenswrapper[5131]: I0107 10:15:57.455488 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cfc74efa-7466-492d-9c61-872725d8696b-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cfc74efa-7466-492d-9c61-872725d8696b\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 07 10:15:57 crc kubenswrapper[5131]: I0107 10:15:57.455581 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cfc74efa-7466-492d-9c61-872725d8696b-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cfc74efa-7466-492d-9c61-872725d8696b\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 07 10:15:57 crc kubenswrapper[5131]: I0107 10:15:57.456370 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cfc74efa-7466-492d-9c61-872725d8696b-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cfc74efa-7466-492d-9c61-872725d8696b\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 07 10:15:57 crc kubenswrapper[5131]: I0107 10:15:57.456569 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cfc74efa-7466-492d-9c61-872725d8696b-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cfc74efa-7466-492d-9c61-872725d8696b\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 07 10:15:57 crc kubenswrapper[5131]: I0107 10:15:57.456708 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/cfc74efa-7466-492d-9c61-872725d8696b-builder-dockercfg-vc6bg-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cfc74efa-7466-492d-9c61-872725d8696b\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 07 10:15:57 crc kubenswrapper[5131]: I0107 10:15:57.457101 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cfc74efa-7466-492d-9c61-872725d8696b-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cfc74efa-7466-492d-9c61-872725d8696b\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 07 10:15:57 crc kubenswrapper[5131]: I0107 10:15:57.457384 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cfc74efa-7466-492d-9c61-872725d8696b-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cfc74efa-7466-492d-9c61-872725d8696b\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 07 10:15:57 crc kubenswrapper[5131]: I0107 10:15:57.457486 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cfc74efa-7466-492d-9c61-872725d8696b-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cfc74efa-7466-492d-9c61-872725d8696b\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 07 10:15:57 crc kubenswrapper[5131]: I0107 10:15:57.457544 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/cfc74efa-7466-492d-9c61-872725d8696b-builder-dockercfg-vc6bg-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cfc74efa-7466-492d-9c61-872725d8696b\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 07 10:15:57 crc kubenswrapper[5131]: I0107 10:15:57.457585 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cfc74efa-7466-492d-9c61-872725d8696b-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cfc74efa-7466-492d-9c61-872725d8696b\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 07 10:15:57 crc kubenswrapper[5131]: I0107 10:15:57.457628 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cfc74efa-7466-492d-9c61-872725d8696b-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cfc74efa-7466-492d-9c61-872725d8696b\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 07 10:15:57 crc kubenswrapper[5131]: I0107 10:15:57.457689 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z7ftt\" (UniqueName: \"kubernetes.io/projected/cfc74efa-7466-492d-9c61-872725d8696b-kube-api-access-z7ftt\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cfc74efa-7466-492d-9c61-872725d8696b\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 07 10:15:57 crc kubenswrapper[5131]: I0107 10:15:57.457822 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cfc74efa-7466-492d-9c61-872725d8696b-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cfc74efa-7466-492d-9c61-872725d8696b\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 07 10:15:57 crc kubenswrapper[5131]: I0107 10:15:57.457964 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cfc74efa-7466-492d-9c61-872725d8696b-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cfc74efa-7466-492d-9c61-872725d8696b\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 07 10:15:57 crc kubenswrapper[5131]: I0107 10:15:57.458008 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cfc74efa-7466-492d-9c61-872725d8696b-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cfc74efa-7466-492d-9c61-872725d8696b\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 07 10:15:57 crc kubenswrapper[5131]: I0107 10:15:57.458265 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cfc74efa-7466-492d-9c61-872725d8696b-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cfc74efa-7466-492d-9c61-872725d8696b\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 07 10:15:57 crc kubenswrapper[5131]: I0107 10:15:57.463067 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/cfc74efa-7466-492d-9c61-872725d8696b-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cfc74efa-7466-492d-9c61-872725d8696b\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 07 10:15:57 crc kubenswrapper[5131]: I0107 10:15:57.463203 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/cfc74efa-7466-492d-9c61-872725d8696b-builder-dockercfg-vc6bg-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cfc74efa-7466-492d-9c61-872725d8696b\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 07 10:15:57 crc kubenswrapper[5131]: I0107 10:15:57.475138 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/cfc74efa-7466-492d-9c61-872725d8696b-builder-dockercfg-vc6bg-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cfc74efa-7466-492d-9c61-872725d8696b\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 07 10:15:57 crc kubenswrapper[5131]: I0107 10:15:57.478943 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7ftt\" (UniqueName: \"kubernetes.io/projected/cfc74efa-7466-492d-9c61-872725d8696b-kube-api-access-z7ftt\") pod \"service-telemetry-framework-index-1-build\" (UID: \"cfc74efa-7466-492d-9c61-872725d8696b\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 07 10:15:57 crc kubenswrapper[5131]: I0107 10:15:57.617734 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 07 10:15:57 crc kubenswrapper[5131]: I0107 10:15:57.896254 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"]
Jan 07 10:15:58 crc kubenswrapper[5131]: I0107 10:15:58.239309 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"cfc74efa-7466-492d-9c61-872725d8696b","Type":"ContainerStarted","Data":"8374f9123a88eb1784683875120933b82f389aceacb399ed005bfc145686dafd"}
Jan 07 10:15:58 crc kubenswrapper[5131]: I0107 10:15:58.239992 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"cfc74efa-7466-492d-9c61-872725d8696b","Type":"ContainerStarted","Data":"5c4a41054cbdd267b6c1c63f55705771c337b1b88bfb40fbeebac40d55305256"}
Jan 07 10:15:59 crc kubenswrapper[5131]: I0107 10:15:59.248235 5131 generic.go:358] "Generic (PLEG): container finished" podID="cfc74efa-7466-492d-9c61-872725d8696b" containerID="8374f9123a88eb1784683875120933b82f389aceacb399ed005bfc145686dafd" exitCode=0
Jan 07 10:15:59 crc kubenswrapper[5131]: I0107 10:15:59.248525 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"cfc74efa-7466-492d-9c61-872725d8696b","Type":"ContainerDied","Data":"8374f9123a88eb1784683875120933b82f389aceacb399ed005bfc145686dafd"}
Jan 07 10:16:00 crc kubenswrapper[5131]: I0107 10:16:00.143403 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29463016-vw4p6"]
Jan 07 10:16:00 crc kubenswrapper[5131]: I0107 10:16:00.170184 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29463016-vw4p6"
Jan 07 10:16:00 crc kubenswrapper[5131]: I0107 10:16:00.172071 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29463016-vw4p6"]
Jan 07 10:16:00 crc kubenswrapper[5131]: I0107 10:16:00.173266 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-l8fwl\""
Jan 07 10:16:00 crc kubenswrapper[5131]: I0107 10:16:00.174061 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 07 10:16:00 crc kubenswrapper[5131]: I0107 10:16:00.180397 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 07 10:16:00 crc kubenswrapper[5131]: I0107 10:16:00.256523 5131 generic.go:358] "Generic (PLEG): container finished" podID="cfc74efa-7466-492d-9c61-872725d8696b" containerID="83752681c263bf800213aa1f5e920561462033ab0123c8cfc71e6edc7be02101" exitCode=0
Jan 07 10:16:00 crc kubenswrapper[5131]: I0107 10:16:00.256606 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"cfc74efa-7466-492d-9c61-872725d8696b","Type":"ContainerDied","Data":"83752681c263bf800213aa1f5e920561462033ab0123c8cfc71e6edc7be02101"}
Jan 07 10:16:00 crc kubenswrapper[5131]: I0107 10:16:00.287857 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-1-build_cfc74efa-7466-492d-9c61-872725d8696b/manage-dockerfile/0.log"
Jan 07 10:16:00 crc kubenswrapper[5131]: I0107 10:16:00.313552 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggn6t\" (UniqueName: \"kubernetes.io/projected/9a45faf0-6e45-472b-a9eb-118cdf319d61-kube-api-access-ggn6t\") pod \"auto-csr-approver-29463016-vw4p6\" (UID: \"9a45faf0-6e45-472b-a9eb-118cdf319d61\") " pod="openshift-infra/auto-csr-approver-29463016-vw4p6"
Jan 07 10:16:00 crc kubenswrapper[5131]: I0107 10:16:00.414801 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ggn6t\" (UniqueName: \"kubernetes.io/projected/9a45faf0-6e45-472b-a9eb-118cdf319d61-kube-api-access-ggn6t\") pod \"auto-csr-approver-29463016-vw4p6\" (UID: \"9a45faf0-6e45-472b-a9eb-118cdf319d61\") " pod="openshift-infra/auto-csr-approver-29463016-vw4p6"
Jan 07 10:16:00 crc kubenswrapper[5131]: I0107 10:16:00.440303 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggn6t\" (UniqueName: \"kubernetes.io/projected/9a45faf0-6e45-472b-a9eb-118cdf319d61-kube-api-access-ggn6t\") pod \"auto-csr-approver-29463016-vw4p6\" (UID: \"9a45faf0-6e45-472b-a9eb-118cdf319d61\") " pod="openshift-infra/auto-csr-approver-29463016-vw4p6"
Jan 07 10:16:00 crc kubenswrapper[5131]: I0107 10:16:00.495051 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29463016-vw4p6"
Jan 07 10:16:00 crc kubenswrapper[5131]: I0107 10:16:00.723433 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29463016-vw4p6"]
Jan 07 10:16:01 crc kubenswrapper[5131]: I0107 10:16:01.267468 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29463016-vw4p6" event={"ID":"9a45faf0-6e45-472b-a9eb-118cdf319d61","Type":"ContainerStarted","Data":"08f459a98d4b269a9fdf8e4c52c4c380378beef93a15e29e21352f7f8f887079"}
Jan 07 10:16:01 crc kubenswrapper[5131]: I0107 10:16:01.270785 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"cfc74efa-7466-492d-9c61-872725d8696b","Type":"ContainerStarted","Data":"ef0af01e704a8504fc32eb0fed0b4ffdff116e37d5d024b8a6da72399cb72883"}
Jan 07 10:16:01 crc kubenswrapper[5131]: I0107 10:16:01.323649 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-framework-index-1-build" podStartSLOduration=4.323616296 podStartE2EDuration="4.323616296s" podCreationTimestamp="2026-01-07 10:15:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 10:16:01.312637825 +0000 UTC m=+1589.478939409" watchObservedRunningTime="2026-01-07 10:16:01.323616296 +0000 UTC m=+1589.489917900"
Jan 07 10:16:02 crc kubenswrapper[5131]: I0107 10:16:02.280803 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29463016-vw4p6" event={"ID":"9a45faf0-6e45-472b-a9eb-118cdf319d61","Type":"ContainerStarted","Data":"df74a4092217e2b7e8717ba9e233f72568342ba718f8035b5a919337968a3e18"}
Jan 07 10:16:03 crc kubenswrapper[5131]: I0107 10:16:03.291497 5131 generic.go:358] "Generic (PLEG): container finished" podID="9a45faf0-6e45-472b-a9eb-118cdf319d61" containerID="df74a4092217e2b7e8717ba9e233f72568342ba718f8035b5a919337968a3e18" exitCode=0
Jan 07 10:16:03 crc kubenswrapper[5131]: I0107 10:16:03.291619 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29463016-vw4p6" event={"ID":"9a45faf0-6e45-472b-a9eb-118cdf319d61","Type":"ContainerDied","Data":"df74a4092217e2b7e8717ba9e233f72568342ba718f8035b5a919337968a3e18"}
Jan 07 10:16:04 crc kubenswrapper[5131]: I0107 10:16:04.554328 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29463016-vw4p6"
Jan 07 10:16:04 crc kubenswrapper[5131]: I0107 10:16:04.695983 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggn6t\" (UniqueName: \"kubernetes.io/projected/9a45faf0-6e45-472b-a9eb-118cdf319d61-kube-api-access-ggn6t\") pod \"9a45faf0-6e45-472b-a9eb-118cdf319d61\" (UID: \"9a45faf0-6e45-472b-a9eb-118cdf319d61\") "
Jan 07 10:16:04 crc kubenswrapper[5131]: I0107 10:16:04.701269 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a45faf0-6e45-472b-a9eb-118cdf319d61-kube-api-access-ggn6t" (OuterVolumeSpecName: "kube-api-access-ggn6t") pod "9a45faf0-6e45-472b-a9eb-118cdf319d61" (UID: "9a45faf0-6e45-472b-a9eb-118cdf319d61"). InnerVolumeSpecName "kube-api-access-ggn6t". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 07 10:16:04 crc kubenswrapper[5131]: I0107 10:16:04.797647 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ggn6t\" (UniqueName: \"kubernetes.io/projected/9a45faf0-6e45-472b-a9eb-118cdf319d61-kube-api-access-ggn6t\") on node \"crc\" DevicePath \"\""
Jan 07 10:16:05 crc kubenswrapper[5131]: I0107 10:16:05.279508 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29463010-h8fm9"]
Jan 07 10:16:05 crc kubenswrapper[5131]: I0107 10:16:05.285920 5131 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29463010-h8fm9"]
Jan 07 10:16:05 crc kubenswrapper[5131]: I0107 10:16:05.305533 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29463016-vw4p6" event={"ID":"9a45faf0-6e45-472b-a9eb-118cdf319d61","Type":"ContainerDied","Data":"08f459a98d4b269a9fdf8e4c52c4c380378beef93a15e29e21352f7f8f887079"}
Jan 07 10:16:05 crc kubenswrapper[5131]: I0107 10:16:05.305593 5131 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08f459a98d4b269a9fdf8e4c52c4c380378beef93a15e29e21352f7f8f887079"
Jan 07 10:16:05 crc kubenswrapper[5131]: I0107 10:16:05.305691 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29463016-vw4p6"
Jan 07 10:16:06 crc kubenswrapper[5131]: I0107 10:16:06.190201 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e840dc4-d123-4300-b964-e41fab140d92" path="/var/lib/kubelet/pods/5e840dc4-d123-4300-b964-e41fab140d92/volumes"
Jan 07 10:16:20 crc kubenswrapper[5131]: I0107 10:16:20.662940 5131 patch_prober.go:28] interesting pod/machine-config-daemon-dvdrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 07 10:16:20 crc kubenswrapper[5131]: I0107 10:16:20.663371 5131 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 07 10:16:34 crc kubenswrapper[5131]: I0107 10:16:34.504023 5131 scope.go:117] "RemoveContainer" containerID="3b4c65dbc24e307a05bf3fa51b6a48cb6142b0e61dcbbaa514e8312044c730b6"
Jan 07 10:16:47 crc kubenswrapper[5131]: I0107 10:16:47.665506 5131 generic.go:358] "Generic (PLEG): container finished" podID="cfc74efa-7466-492d-9c61-872725d8696b" containerID="ef0af01e704a8504fc32eb0fed0b4ffdff116e37d5d024b8a6da72399cb72883" exitCode=0
Jan 07 10:16:47 crc kubenswrapper[5131]: I0107 10:16:47.666144 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"cfc74efa-7466-492d-9c61-872725d8696b","Type":"ContainerDied","Data":"ef0af01e704a8504fc32eb0fed0b4ffdff116e37d5d024b8a6da72399cb72883"}
Jan 07 10:16:48 crc kubenswrapper[5131]: I0107 10:16:48.966625 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 07 10:16:49 crc kubenswrapper[5131]: I0107 10:16:49.069206 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/cfc74efa-7466-492d-9c61-872725d8696b-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"cfc74efa-7466-492d-9c61-872725d8696b\" (UID: \"cfc74efa-7466-492d-9c61-872725d8696b\") "
Jan 07 10:16:49 crc kubenswrapper[5131]: I0107 10:16:49.069490 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cfc74efa-7466-492d-9c61-872725d8696b-container-storage-root\") pod \"cfc74efa-7466-492d-9c61-872725d8696b\" (UID: \"cfc74efa-7466-492d-9c61-872725d8696b\") "
Jan 07 10:16:49 crc kubenswrapper[5131]: I0107 10:16:49.070960 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cfc74efa-7466-492d-9c61-872725d8696b-build-system-configs\") pod \"cfc74efa-7466-492d-9c61-872725d8696b\" (UID: \"cfc74efa-7466-492d-9c61-872725d8696b\") "
Jan 07 10:16:49 crc kubenswrapper[5131]: I0107 10:16:49.071026 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cfc74efa-7466-492d-9c61-872725d8696b-buildworkdir\") pod \"cfc74efa-7466-492d-9c61-872725d8696b\" (UID: \"cfc74efa-7466-492d-9c61-872725d8696b\") "
Jan 07 10:16:49 crc kubenswrapper[5131]: I0107 10:16:49.071121 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cfc74efa-7466-492d-9c61-872725d8696b-build-ca-bundles\") pod \"cfc74efa-7466-492d-9c61-872725d8696b\" (UID: \"cfc74efa-7466-492d-9c61-872725d8696b\") "
Jan 07 10:16:49 crc kubenswrapper[5131]: I0107 10:16:49.071215 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cfc74efa-7466-492d-9c61-872725d8696b-build-proxy-ca-bundles\") pod \"cfc74efa-7466-492d-9c61-872725d8696b\" (UID: \"cfc74efa-7466-492d-9c61-872725d8696b\") "
Jan 07 10:16:49 crc kubenswrapper[5131]: I0107 10:16:49.071263 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/cfc74efa-7466-492d-9c61-872725d8696b-builder-dockercfg-vc6bg-pull\") pod \"cfc74efa-7466-492d-9c61-872725d8696b\" (UID: \"cfc74efa-7466-492d-9c61-872725d8696b\") "
Jan 07 10:16:49 crc kubenswrapper[5131]: I0107 10:16:49.071316 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cfc74efa-7466-492d-9c61-872725d8696b-node-pullsecrets\") pod \"cfc74efa-7466-492d-9c61-872725d8696b\" (UID: \"cfc74efa-7466-492d-9c61-872725d8696b\") "
Jan 07 10:16:49 crc kubenswrapper[5131]: I0107 10:16:49.071341 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cfc74efa-7466-492d-9c61-872725d8696b-container-storage-run\") pod \"cfc74efa-7466-492d-9c61-872725d8696b\" (UID: \"cfc74efa-7466-492d-9c61-872725d8696b\") "
Jan 07 10:16:49 crc kubenswrapper[5131]: I0107 10:16:49.071385 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cfc74efa-7466-492d-9c61-872725d8696b-build-blob-cache\") pod \"cfc74efa-7466-492d-9c61-872725d8696b\" (UID: \"cfc74efa-7466-492d-9c61-872725d8696b\") "
Jan 07 10:16:49 crc kubenswrapper[5131]: I0107 10:16:49.071413 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7ftt\" (UniqueName: \"kubernetes.io/projected/cfc74efa-7466-492d-9c61-872725d8696b-kube-api-access-z7ftt\") pod \"cfc74efa-7466-492d-9c61-872725d8696b\" (UID: \"cfc74efa-7466-492d-9c61-872725d8696b\") "
Jan 07 10:16:49 crc kubenswrapper[5131]: I0107 10:16:49.071417 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfc74efa-7466-492d-9c61-872725d8696b-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "cfc74efa-7466-492d-9c61-872725d8696b" (UID: "cfc74efa-7466-492d-9c61-872725d8696b"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 07 10:16:49 crc kubenswrapper[5131]: I0107 10:16:49.071521 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/cfc74efa-7466-492d-9c61-872725d8696b-builder-dockercfg-vc6bg-push\") pod \"cfc74efa-7466-492d-9c61-872725d8696b\" (UID: \"cfc74efa-7466-492d-9c61-872725d8696b\") "
Jan 07 10:16:49 crc kubenswrapper[5131]: I0107 10:16:49.071544 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cfc74efa-7466-492d-9c61-872725d8696b-buildcachedir\") pod \"cfc74efa-7466-492d-9c61-872725d8696b\" (UID: \"cfc74efa-7466-492d-9c61-872725d8696b\") "
Jan 07 10:16:49 crc kubenswrapper[5131]: I0107 10:16:49.071780 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cfc74efa-7466-492d-9c61-872725d8696b-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "cfc74efa-7466-492d-9c61-872725d8696b" (UID: "cfc74efa-7466-492d-9c61-872725d8696b"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 07 10:16:49 crc kubenswrapper[5131]: I0107 10:16:49.071844 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cfc74efa-7466-492d-9c61-872725d8696b-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "cfc74efa-7466-492d-9c61-872725d8696b" (UID: "cfc74efa-7466-492d-9c61-872725d8696b"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 07 10:16:49 crc kubenswrapper[5131]: I0107 10:16:49.072088 5131 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cfc74efa-7466-492d-9c61-872725d8696b-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Jan 07 10:16:49 crc kubenswrapper[5131]: I0107 10:16:49.072111 5131 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cfc74efa-7466-492d-9c61-872725d8696b-buildcachedir\") on node \"crc\" DevicePath \"\""
Jan 07 10:16:49 crc kubenswrapper[5131]: I0107 10:16:49.072123 5131 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cfc74efa-7466-492d-9c61-872725d8696b-build-system-configs\") on node \"crc\" DevicePath \"\""
Jan 07 10:16:49 crc kubenswrapper[5131]: I0107 10:16:49.072379 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfc74efa-7466-492d-9c61-872725d8696b-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "cfc74efa-7466-492d-9c61-872725d8696b" (UID: "cfc74efa-7466-492d-9c61-872725d8696b"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 07 10:16:49 crc kubenswrapper[5131]: I0107 10:16:49.072893 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfc74efa-7466-492d-9c61-872725d8696b-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "cfc74efa-7466-492d-9c61-872725d8696b" (UID: "cfc74efa-7466-492d-9c61-872725d8696b"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 07 10:16:49 crc kubenswrapper[5131]: I0107 10:16:49.073006 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfc74efa-7466-492d-9c61-872725d8696b-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "cfc74efa-7466-492d-9c61-872725d8696b" (UID: "cfc74efa-7466-492d-9c61-872725d8696b"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 07 10:16:49 crc kubenswrapper[5131]: I0107 10:16:49.073676 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfc74efa-7466-492d-9c61-872725d8696b-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "cfc74efa-7466-492d-9c61-872725d8696b" (UID: "cfc74efa-7466-492d-9c61-872725d8696b"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 07 10:16:49 crc kubenswrapper[5131]: I0107 10:16:49.076246 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfc74efa-7466-492d-9c61-872725d8696b-builder-dockercfg-vc6bg-pull" (OuterVolumeSpecName: "builder-dockercfg-vc6bg-pull") pod "cfc74efa-7466-492d-9c61-872725d8696b" (UID: "cfc74efa-7466-492d-9c61-872725d8696b"). InnerVolumeSpecName "builder-dockercfg-vc6bg-pull". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 07 10:16:49 crc kubenswrapper[5131]: I0107 10:16:49.076533 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfc74efa-7466-492d-9c61-872725d8696b-service-telemetry-framework-index-dockercfg-user-build-volume" (OuterVolumeSpecName: "service-telemetry-framework-index-dockercfg-user-build-volume") pod "cfc74efa-7466-492d-9c61-872725d8696b" (UID: "cfc74efa-7466-492d-9c61-872725d8696b"). InnerVolumeSpecName "service-telemetry-framework-index-dockercfg-user-build-volume". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 07 10:16:49 crc kubenswrapper[5131]: I0107 10:16:49.077810 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfc74efa-7466-492d-9c61-872725d8696b-builder-dockercfg-vc6bg-push" (OuterVolumeSpecName: "builder-dockercfg-vc6bg-push") pod "cfc74efa-7466-492d-9c61-872725d8696b" (UID: "cfc74efa-7466-492d-9c61-872725d8696b"). InnerVolumeSpecName "builder-dockercfg-vc6bg-push". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 07 10:16:49 crc kubenswrapper[5131]: I0107 10:16:49.083714 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfc74efa-7466-492d-9c61-872725d8696b-kube-api-access-z7ftt" (OuterVolumeSpecName: "kube-api-access-z7ftt") pod "cfc74efa-7466-492d-9c61-872725d8696b" (UID: "cfc74efa-7466-492d-9c61-872725d8696b"). InnerVolumeSpecName "kube-api-access-z7ftt". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 07 10:16:49 crc kubenswrapper[5131]: I0107 10:16:49.173906 5131 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cfc74efa-7466-492d-9c61-872725d8696b-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 07 10:16:49 crc kubenswrapper[5131]: I0107 10:16:49.173935 5131 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-vc6bg-pull\" (UniqueName: \"kubernetes.io/secret/cfc74efa-7466-492d-9c61-872725d8696b-builder-dockercfg-vc6bg-pull\") on node \"crc\" DevicePath \"\""
Jan 07 10:16:49 crc kubenswrapper[5131]: I0107 10:16:49.173944 5131 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cfc74efa-7466-492d-9c61-872725d8696b-container-storage-run\") on node \"crc\" DevicePath \"\""
Jan 07 10:16:49 crc kubenswrapper[5131]: I0107 10:16:49.173952 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z7ftt\" (UniqueName: \"kubernetes.io/projected/cfc74efa-7466-492d-9c61-872725d8696b-kube-api-access-z7ftt\") on node \"crc\" DevicePath \"\""
Jan 07 10:16:49 crc kubenswrapper[5131]: I0107 10:16:49.173985 5131 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-vc6bg-push\" (UniqueName: \"kubernetes.io/secret/cfc74efa-7466-492d-9c61-872725d8696b-builder-dockercfg-vc6bg-push\") on node \"crc\" DevicePath \"\""
Jan 07 10:16:49 crc kubenswrapper[5131]: I0107 10:16:49.173994 5131 reconciler_common.go:299] "Volume detached for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/cfc74efa-7466-492d-9c61-872725d8696b-service-telemetry-framework-index-dockercfg-user-build-volume\") on node \"crc\" DevicePath \"\""
Jan 07 10:16:49 crc kubenswrapper[5131]: I0107 10:16:49.174006 5131 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cfc74efa-7466-492d-9c61-872725d8696b-buildworkdir\") on node \"crc\" DevicePath \"\""
Jan 07 10:16:49 crc kubenswrapper[5131]: I0107 10:16:49.174015 5131 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cfc74efa-7466-492d-9c61-872725d8696b-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 07 10:16:49 crc kubenswrapper[5131]: I0107 10:16:49.281220 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfc74efa-7466-492d-9c61-872725d8696b-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "cfc74efa-7466-492d-9c61-872725d8696b" (UID: "cfc74efa-7466-492d-9c61-872725d8696b"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 07 10:16:49 crc kubenswrapper[5131]: I0107 10:16:49.377101 5131 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cfc74efa-7466-492d-9c61-872725d8696b-build-blob-cache\") on node \"crc\" DevicePath \"\""
Jan 07 10:16:49 crc kubenswrapper[5131]: I0107 10:16:49.683134 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 07 10:16:49 crc kubenswrapper[5131]: I0107 10:16:49.683155 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"cfc74efa-7466-492d-9c61-872725d8696b","Type":"ContainerDied","Data":"5c4a41054cbdd267b6c1c63f55705771c337b1b88bfb40fbeebac40d55305256"}
Jan 07 10:16:49 crc kubenswrapper[5131]: I0107 10:16:49.683625 5131 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c4a41054cbdd267b6c1c63f55705771c337b1b88bfb40fbeebac40d55305256"
Jan 07 10:16:50 crc kubenswrapper[5131]: I0107 10:16:50.181692 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfc74efa-7466-492d-9c61-872725d8696b-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "cfc74efa-7466-492d-9c61-872725d8696b" (UID: "cfc74efa-7466-492d-9c61-872725d8696b"). InnerVolumeSpecName "container-storage-root".
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:16:50 crc kubenswrapper[5131]: I0107 10:16:50.189062 5131 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cfc74efa-7466-492d-9c61-872725d8696b-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 07 10:16:50 crc kubenswrapper[5131]: I0107 10:16:50.663511 5131 patch_prober.go:28] interesting pod/machine-config-daemon-dvdrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 07 10:16:50 crc kubenswrapper[5131]: I0107 10:16:50.663580 5131 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 07 10:16:50 crc kubenswrapper[5131]: I0107 10:16:50.663620 5131 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" Jan 07 10:16:50 crc kubenswrapper[5131]: I0107 10:16:50.664194 5131 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9663cd7495facf8f3b5c9cd42ca06c0e50d8cba730f2743bbdac9e0b5db67e25"} pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 07 10:16:50 crc kubenswrapper[5131]: I0107 10:16:50.664266 5131 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73" 
containerName="machine-config-daemon" containerID="cri-o://9663cd7495facf8f3b5c9cd42ca06c0e50d8cba730f2743bbdac9e0b5db67e25" gracePeriod=600 Jan 07 10:16:51 crc kubenswrapper[5131]: E0107 10:16:51.299505 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvdrn_openshift-machine-config-operator(3942e752-44ba-4678-8723-6cd778e60d73)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73" Jan 07 10:16:51 crc kubenswrapper[5131]: I0107 10:16:51.717400 5131 generic.go:358] "Generic (PLEG): container finished" podID="3942e752-44ba-4678-8723-6cd778e60d73" containerID="9663cd7495facf8f3b5c9cd42ca06c0e50d8cba730f2743bbdac9e0b5db67e25" exitCode=0 Jan 07 10:16:51 crc kubenswrapper[5131]: I0107 10:16:51.717515 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" event={"ID":"3942e752-44ba-4678-8723-6cd778e60d73","Type":"ContainerDied","Data":"9663cd7495facf8f3b5c9cd42ca06c0e50d8cba730f2743bbdac9e0b5db67e25"} Jan 07 10:16:51 crc kubenswrapper[5131]: I0107 10:16:51.717603 5131 scope.go:117] "RemoveContainer" containerID="13b258610a3045e67e9e5de274b918c4da88f0376e2747328b51f3ef9deaf0e0" Jan 07 10:16:51 crc kubenswrapper[5131]: I0107 10:16:51.718448 5131 scope.go:117] "RemoveContainer" containerID="9663cd7495facf8f3b5c9cd42ca06c0e50d8cba730f2743bbdac9e0b5db67e25" Jan 07 10:16:51 crc kubenswrapper[5131]: E0107 10:16:51.718977 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvdrn_openshift-machine-config-operator(3942e752-44ba-4678-8723-6cd778e60d73)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73" Jan 07 10:16:52 crc kubenswrapper[5131]: I0107 10:16:52.962207 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-llfk8"] Jan 07 10:16:52 crc kubenswrapper[5131]: I0107 10:16:52.964129 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cfc74efa-7466-492d-9c61-872725d8696b" containerName="git-clone" Jan 07 10:16:52 crc kubenswrapper[5131]: I0107 10:16:52.964153 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfc74efa-7466-492d-9c61-872725d8696b" containerName="git-clone" Jan 07 10:16:52 crc kubenswrapper[5131]: I0107 10:16:52.964251 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cfc74efa-7466-492d-9c61-872725d8696b" containerName="manage-dockerfile" Jan 07 10:16:52 crc kubenswrapper[5131]: I0107 10:16:52.964266 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfc74efa-7466-492d-9c61-872725d8696b" containerName="manage-dockerfile" Jan 07 10:16:52 crc kubenswrapper[5131]: I0107 10:16:52.964572 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9a45faf0-6e45-472b-a9eb-118cdf319d61" containerName="oc" Jan 07 10:16:52 crc kubenswrapper[5131]: I0107 10:16:52.964591 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a45faf0-6e45-472b-a9eb-118cdf319d61" containerName="oc" Jan 07 10:16:52 crc kubenswrapper[5131]: I0107 10:16:52.964607 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cfc74efa-7466-492d-9c61-872725d8696b" containerName="docker-build" Jan 07 10:16:52 crc kubenswrapper[5131]: I0107 10:16:52.964616 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfc74efa-7466-492d-9c61-872725d8696b" containerName="docker-build" Jan 07 10:16:52 crc kubenswrapper[5131]: I0107 10:16:52.964978 5131 memory_manager.go:356] "RemoveStaleState 
removing state" podUID="9a45faf0-6e45-472b-a9eb-118cdf319d61" containerName="oc" Jan 07 10:16:52 crc kubenswrapper[5131]: I0107 10:16:52.965041 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="cfc74efa-7466-492d-9c61-872725d8696b" containerName="docker-build" Jan 07 10:16:52 crc kubenswrapper[5131]: I0107 10:16:52.988570 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-llfk8"] Jan 07 10:16:52 crc kubenswrapper[5131]: I0107 10:16:52.988781 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-llfk8" Jan 07 10:16:52 crc kubenswrapper[5131]: I0107 10:16:52.994139 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"infrawatch-operators-dockercfg-rxkpj\"" Jan 07 10:16:53 crc kubenswrapper[5131]: I0107 10:16:53.130282 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n8mk\" (UniqueName: \"kubernetes.io/projected/e33dae2b-728f-4c22-8f27-490abf29f905-kube-api-access-9n8mk\") pod \"infrawatch-operators-llfk8\" (UID: \"e33dae2b-728f-4c22-8f27-490abf29f905\") " pod="service-telemetry/infrawatch-operators-llfk8" Jan 07 10:16:53 crc kubenswrapper[5131]: I0107 10:16:53.232813 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9n8mk\" (UniqueName: \"kubernetes.io/projected/e33dae2b-728f-4c22-8f27-490abf29f905-kube-api-access-9n8mk\") pod \"infrawatch-operators-llfk8\" (UID: \"e33dae2b-728f-4c22-8f27-490abf29f905\") " pod="service-telemetry/infrawatch-operators-llfk8" Jan 07 10:16:53 crc kubenswrapper[5131]: I0107 10:16:53.255576 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n8mk\" (UniqueName: \"kubernetes.io/projected/e33dae2b-728f-4c22-8f27-490abf29f905-kube-api-access-9n8mk\") pod \"infrawatch-operators-llfk8\" (UID: 
\"e33dae2b-728f-4c22-8f27-490abf29f905\") " pod="service-telemetry/infrawatch-operators-llfk8" Jan 07 10:16:53 crc kubenswrapper[5131]: I0107 10:16:53.314276 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-llfk8" Jan 07 10:16:53 crc kubenswrapper[5131]: I0107 10:16:53.588166 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-llfk8"] Jan 07 10:16:53 crc kubenswrapper[5131]: I0107 10:16:53.598874 5131 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 07 10:16:53 crc kubenswrapper[5131]: I0107 10:16:53.739948 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-llfk8" event={"ID":"e33dae2b-728f-4c22-8f27-490abf29f905","Type":"ContainerStarted","Data":"544bc9949b12c294de883010fec3c60f996ff3a7cc2be6bc149e2ec923b9af3a"} Jan 07 10:17:02 crc kubenswrapper[5131]: I0107 10:17:02.189995 5131 scope.go:117] "RemoveContainer" containerID="9663cd7495facf8f3b5c9cd42ca06c0e50d8cba730f2743bbdac9e0b5db67e25" Jan 07 10:17:02 crc kubenswrapper[5131]: E0107 10:17:02.191323 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvdrn_openshift-machine-config-operator(3942e752-44ba-4678-8723-6cd778e60d73)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73" Jan 07 10:17:05 crc kubenswrapper[5131]: I0107 10:17:05.835042 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-llfk8" event={"ID":"e33dae2b-728f-4c22-8f27-490abf29f905","Type":"ContainerStarted","Data":"905b809288da435538f46563b72debb857b436a4ceea4c92f95ac73a8be05677"} Jan 07 10:17:05 crc kubenswrapper[5131]: I0107 10:17:05.855480 5131 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-llfk8" podStartSLOduration=2.730855567 podStartE2EDuration="13.855455604s" podCreationTimestamp="2026-01-07 10:16:52 +0000 UTC" firstStartedPulling="2026-01-07 10:16:53.599177649 +0000 UTC m=+1641.765479213" lastFinishedPulling="2026-01-07 10:17:04.723777686 +0000 UTC m=+1652.890079250" observedRunningTime="2026-01-07 10:17:05.854313735 +0000 UTC m=+1654.020615339" watchObservedRunningTime="2026-01-07 10:17:05.855455604 +0000 UTC m=+1654.021757208" Jan 07 10:17:13 crc kubenswrapper[5131]: I0107 10:17:13.315075 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/infrawatch-operators-llfk8" Jan 07 10:17:13 crc kubenswrapper[5131]: I0107 10:17:13.315740 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-llfk8" Jan 07 10:17:13 crc kubenswrapper[5131]: I0107 10:17:13.344577 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-llfk8" Jan 07 10:17:13 crc kubenswrapper[5131]: I0107 10:17:13.956265 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-llfk8" Jan 07 10:17:15 crc kubenswrapper[5131]: I0107 10:17:15.180907 5131 scope.go:117] "RemoveContainer" containerID="9663cd7495facf8f3b5c9cd42ca06c0e50d8cba730f2743bbdac9e0b5db67e25" Jan 07 10:17:15 crc kubenswrapper[5131]: E0107 10:17:15.181362 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvdrn_openshift-machine-config-operator(3942e752-44ba-4678-8723-6cd778e60d73)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73" Jan 
07 10:17:15 crc kubenswrapper[5131]: I0107 10:17:15.621885 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65agm2xd"] Jan 07 10:17:15 crc kubenswrapper[5131]: I0107 10:17:15.645554 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65agm2xd" Jan 07 10:17:15 crc kubenswrapper[5131]: I0107 10:17:15.649238 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65agm2xd"] Jan 07 10:17:15 crc kubenswrapper[5131]: I0107 10:17:15.779106 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/19dc369d-77d8-473a-86f9-b252306cbe4b-bundle\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65agm2xd\" (UID: \"19dc369d-77d8-473a-86f9-b252306cbe4b\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65agm2xd" Jan 07 10:17:15 crc kubenswrapper[5131]: I0107 10:17:15.779179 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg9lz\" (UniqueName: \"kubernetes.io/projected/19dc369d-77d8-473a-86f9-b252306cbe4b-kube-api-access-rg9lz\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65agm2xd\" (UID: \"19dc369d-77d8-473a-86f9-b252306cbe4b\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65agm2xd" Jan 07 10:17:15 crc kubenswrapper[5131]: I0107 10:17:15.779269 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/19dc369d-77d8-473a-86f9-b252306cbe4b-util\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65agm2xd\" (UID: \"19dc369d-77d8-473a-86f9-b252306cbe4b\") " 
pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65agm2xd" Jan 07 10:17:15 crc kubenswrapper[5131]: I0107 10:17:15.881006 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/19dc369d-77d8-473a-86f9-b252306cbe4b-bundle\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65agm2xd\" (UID: \"19dc369d-77d8-473a-86f9-b252306cbe4b\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65agm2xd" Jan 07 10:17:15 crc kubenswrapper[5131]: I0107 10:17:15.881331 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rg9lz\" (UniqueName: \"kubernetes.io/projected/19dc369d-77d8-473a-86f9-b252306cbe4b-kube-api-access-rg9lz\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65agm2xd\" (UID: \"19dc369d-77d8-473a-86f9-b252306cbe4b\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65agm2xd" Jan 07 10:17:15 crc kubenswrapper[5131]: I0107 10:17:15.881393 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/19dc369d-77d8-473a-86f9-b252306cbe4b-util\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65agm2xd\" (UID: \"19dc369d-77d8-473a-86f9-b252306cbe4b\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65agm2xd" Jan 07 10:17:15 crc kubenswrapper[5131]: I0107 10:17:15.881730 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/19dc369d-77d8-473a-86f9-b252306cbe4b-bundle\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65agm2xd\" (UID: \"19dc369d-77d8-473a-86f9-b252306cbe4b\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65agm2xd" Jan 07 10:17:15 crc kubenswrapper[5131]: I0107 10:17:15.881810 5131 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/19dc369d-77d8-473a-86f9-b252306cbe4b-util\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65agm2xd\" (UID: \"19dc369d-77d8-473a-86f9-b252306cbe4b\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65agm2xd" Jan 07 10:17:15 crc kubenswrapper[5131]: I0107 10:17:15.904472 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rg9lz\" (UniqueName: \"kubernetes.io/projected/19dc369d-77d8-473a-86f9-b252306cbe4b-kube-api-access-rg9lz\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65agm2xd\" (UID: \"19dc369d-77d8-473a-86f9-b252306cbe4b\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65agm2xd" Jan 07 10:17:15 crc kubenswrapper[5131]: I0107 10:17:15.977071 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65agm2xd" Jan 07 10:17:16 crc kubenswrapper[5131]: I0107 10:17:16.409697 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09552mv"] Jan 07 10:17:16 crc kubenswrapper[5131]: I0107 10:17:16.420233 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09552mv" Jan 07 10:17:16 crc kubenswrapper[5131]: I0107 10:17:16.425950 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09552mv"] Jan 07 10:17:16 crc kubenswrapper[5131]: I0107 10:17:16.449862 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65agm2xd"] Jan 07 10:17:16 crc kubenswrapper[5131]: I0107 10:17:16.591346 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwfvq\" (UniqueName: \"kubernetes.io/projected/f2f68d64-4c2a-4064-9087-fa806d271914-kube-api-access-gwfvq\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09552mv\" (UID: \"f2f68d64-4c2a-4064-9087-fa806d271914\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09552mv" Jan 07 10:17:16 crc kubenswrapper[5131]: I0107 10:17:16.591433 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f2f68d64-4c2a-4064-9087-fa806d271914-util\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09552mv\" (UID: \"f2f68d64-4c2a-4064-9087-fa806d271914\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09552mv" Jan 07 10:17:16 crc kubenswrapper[5131]: I0107 10:17:16.591504 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f2f68d64-4c2a-4064-9087-fa806d271914-bundle\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09552mv\" (UID: \"f2f68d64-4c2a-4064-9087-fa806d271914\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09552mv" Jan 07 10:17:16 crc 
kubenswrapper[5131]: I0107 10:17:16.692688 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f2f68d64-4c2a-4064-9087-fa806d271914-util\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09552mv\" (UID: \"f2f68d64-4c2a-4064-9087-fa806d271914\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09552mv" Jan 07 10:17:16 crc kubenswrapper[5131]: I0107 10:17:16.692736 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f2f68d64-4c2a-4064-9087-fa806d271914-bundle\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09552mv\" (UID: \"f2f68d64-4c2a-4064-9087-fa806d271914\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09552mv" Jan 07 10:17:16 crc kubenswrapper[5131]: I0107 10:17:16.692781 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwfvq\" (UniqueName: \"kubernetes.io/projected/f2f68d64-4c2a-4064-9087-fa806d271914-kube-api-access-gwfvq\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09552mv\" (UID: \"f2f68d64-4c2a-4064-9087-fa806d271914\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09552mv" Jan 07 10:17:16 crc kubenswrapper[5131]: I0107 10:17:16.693389 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f2f68d64-4c2a-4064-9087-fa806d271914-util\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09552mv\" (UID: \"f2f68d64-4c2a-4064-9087-fa806d271914\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09552mv" Jan 07 10:17:16 crc kubenswrapper[5131]: I0107 10:17:16.693896 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/f2f68d64-4c2a-4064-9087-fa806d271914-bundle\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09552mv\" (UID: \"f2f68d64-4c2a-4064-9087-fa806d271914\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09552mv" Jan 07 10:17:16 crc kubenswrapper[5131]: I0107 10:17:16.724578 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwfvq\" (UniqueName: \"kubernetes.io/projected/f2f68d64-4c2a-4064-9087-fa806d271914-kube-api-access-gwfvq\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09552mv\" (UID: \"f2f68d64-4c2a-4064-9087-fa806d271914\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09552mv" Jan 07 10:17:16 crc kubenswrapper[5131]: I0107 10:17:16.748512 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09552mv" Jan 07 10:17:16 crc kubenswrapper[5131]: I0107 10:17:16.941255 5131 generic.go:358] "Generic (PLEG): container finished" podID="19dc369d-77d8-473a-86f9-b252306cbe4b" containerID="124a11c96bd30bf9a9f40f87b2a337b9a5df982068ffce915855daaf6eb642a4" exitCode=0 Jan 07 10:17:16 crc kubenswrapper[5131]: I0107 10:17:16.941336 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65agm2xd" event={"ID":"19dc369d-77d8-473a-86f9-b252306cbe4b","Type":"ContainerDied","Data":"124a11c96bd30bf9a9f40f87b2a337b9a5df982068ffce915855daaf6eb642a4"} Jan 07 10:17:16 crc kubenswrapper[5131]: I0107 10:17:16.941775 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65agm2xd" event={"ID":"19dc369d-77d8-473a-86f9-b252306cbe4b","Type":"ContainerStarted","Data":"64a9057c5bd880ed39c44938afe59ebd05c0242994a01ba132d9374bf619ec9c"} Jan 07 10:17:17 crc kubenswrapper[5131]: I0107 
10:17:17.222348 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09552mv"] Jan 07 10:17:17 crc kubenswrapper[5131]: W0107 10:17:17.233226 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2f68d64_4c2a_4064_9087_fa806d271914.slice/crio-c9c042fbc3f34e172cd86793cbc71cc4fc1590d9450b3968cb3c36f7915da8bb WatchSource:0}: Error finding container c9c042fbc3f34e172cd86793cbc71cc4fc1590d9450b3968cb3c36f7915da8bb: Status 404 returned error can't find the container with id c9c042fbc3f34e172cd86793cbc71cc4fc1590d9450b3968cb3c36f7915da8bb Jan 07 10:17:17 crc kubenswrapper[5131]: I0107 10:17:17.954192 5131 generic.go:358] "Generic (PLEG): container finished" podID="f2f68d64-4c2a-4064-9087-fa806d271914" containerID="52024deef9003c3f67f5a5a3826160d00398ed6d20338c71acad7c5fba03093b" exitCode=0 Jan 07 10:17:17 crc kubenswrapper[5131]: I0107 10:17:17.954422 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09552mv" event={"ID":"f2f68d64-4c2a-4064-9087-fa806d271914","Type":"ContainerDied","Data":"52024deef9003c3f67f5a5a3826160d00398ed6d20338c71acad7c5fba03093b"} Jan 07 10:17:17 crc kubenswrapper[5131]: I0107 10:17:17.954497 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09552mv" event={"ID":"f2f68d64-4c2a-4064-9087-fa806d271914","Type":"ContainerStarted","Data":"c9c042fbc3f34e172cd86793cbc71cc4fc1590d9450b3968cb3c36f7915da8bb"} Jan 07 10:17:17 crc kubenswrapper[5131]: I0107 10:17:17.961503 5131 generic.go:358] "Generic (PLEG): container finished" podID="19dc369d-77d8-473a-86f9-b252306cbe4b" containerID="d49423059c96ba5822df14a6766b5f4fd865b14711a042360f22b24906c7dc89" exitCode=0 Jan 07 10:17:17 crc kubenswrapper[5131]: I0107 10:17:17.961702 5131 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65agm2xd" event={"ID":"19dc369d-77d8-473a-86f9-b252306cbe4b","Type":"ContainerDied","Data":"d49423059c96ba5822df14a6766b5f4fd865b14711a042360f22b24906c7dc89"} Jan 07 10:17:18 crc kubenswrapper[5131]: I0107 10:17:18.972196 5131 generic.go:358] "Generic (PLEG): container finished" podID="f2f68d64-4c2a-4064-9087-fa806d271914" containerID="1f3cb7b189f5c269fb1cd811119ab91778cad46058f8e4dea86f3d9af2cb21d7" exitCode=0 Jan 07 10:17:18 crc kubenswrapper[5131]: I0107 10:17:18.972303 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09552mv" event={"ID":"f2f68d64-4c2a-4064-9087-fa806d271914","Type":"ContainerDied","Data":"1f3cb7b189f5c269fb1cd811119ab91778cad46058f8e4dea86f3d9af2cb21d7"} Jan 07 10:17:18 crc kubenswrapper[5131]: I0107 10:17:18.979732 5131 generic.go:358] "Generic (PLEG): container finished" podID="19dc369d-77d8-473a-86f9-b252306cbe4b" containerID="b5970580a3ade9bd0e8c407fbcac8d30b1a43b22a27575033de08edddf265f79" exitCode=0 Jan 07 10:17:18 crc kubenswrapper[5131]: I0107 10:17:18.979853 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65agm2xd" event={"ID":"19dc369d-77d8-473a-86f9-b252306cbe4b","Type":"ContainerDied","Data":"b5970580a3ade9bd0e8c407fbcac8d30b1a43b22a27575033de08edddf265f79"} Jan 07 10:17:19 crc kubenswrapper[5131]: I0107 10:17:19.991892 5131 generic.go:358] "Generic (PLEG): container finished" podID="f2f68d64-4c2a-4064-9087-fa806d271914" containerID="ae1a76710a1954fda1b1876ead90873e686f9037e7abae66bff398adb592597e" exitCode=0 Jan 07 10:17:19 crc kubenswrapper[5131]: I0107 10:17:19.991995 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09552mv" 
event={"ID":"f2f68d64-4c2a-4064-9087-fa806d271914","Type":"ContainerDied","Data":"ae1a76710a1954fda1b1876ead90873e686f9037e7abae66bff398adb592597e"}
Jan 07 10:17:20 crc kubenswrapper[5131]: I0107 10:17:20.274229 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65agm2xd"
Jan 07 10:17:20 crc kubenswrapper[5131]: I0107 10:17:20.361197 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rg9lz\" (UniqueName: \"kubernetes.io/projected/19dc369d-77d8-473a-86f9-b252306cbe4b-kube-api-access-rg9lz\") pod \"19dc369d-77d8-473a-86f9-b252306cbe4b\" (UID: \"19dc369d-77d8-473a-86f9-b252306cbe4b\") "
Jan 07 10:17:20 crc kubenswrapper[5131]: I0107 10:17:20.361330 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/19dc369d-77d8-473a-86f9-b252306cbe4b-bundle\") pod \"19dc369d-77d8-473a-86f9-b252306cbe4b\" (UID: \"19dc369d-77d8-473a-86f9-b252306cbe4b\") "
Jan 07 10:17:20 crc kubenswrapper[5131]: I0107 10:17:20.362229 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19dc369d-77d8-473a-86f9-b252306cbe4b-bundle" (OuterVolumeSpecName: "bundle") pod "19dc369d-77d8-473a-86f9-b252306cbe4b" (UID: "19dc369d-77d8-473a-86f9-b252306cbe4b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 07 10:17:20 crc kubenswrapper[5131]: I0107 10:17:20.367170 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19dc369d-77d8-473a-86f9-b252306cbe4b-kube-api-access-rg9lz" (OuterVolumeSpecName: "kube-api-access-rg9lz") pod "19dc369d-77d8-473a-86f9-b252306cbe4b" (UID: "19dc369d-77d8-473a-86f9-b252306cbe4b"). InnerVolumeSpecName "kube-api-access-rg9lz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 07 10:17:20 crc kubenswrapper[5131]: I0107 10:17:20.462714 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/19dc369d-77d8-473a-86f9-b252306cbe4b-util\") pod \"19dc369d-77d8-473a-86f9-b252306cbe4b\" (UID: \"19dc369d-77d8-473a-86f9-b252306cbe4b\") "
Jan 07 10:17:20 crc kubenswrapper[5131]: I0107 10:17:20.463197 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rg9lz\" (UniqueName: \"kubernetes.io/projected/19dc369d-77d8-473a-86f9-b252306cbe4b-kube-api-access-rg9lz\") on node \"crc\" DevicePath \"\""
Jan 07 10:17:20 crc kubenswrapper[5131]: I0107 10:17:20.463222 5131 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/19dc369d-77d8-473a-86f9-b252306cbe4b-bundle\") on node \"crc\" DevicePath \"\""
Jan 07 10:17:20 crc kubenswrapper[5131]: I0107 10:17:20.492103 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19dc369d-77d8-473a-86f9-b252306cbe4b-util" (OuterVolumeSpecName: "util") pod "19dc369d-77d8-473a-86f9-b252306cbe4b" (UID: "19dc369d-77d8-473a-86f9-b252306cbe4b"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 07 10:17:20 crc kubenswrapper[5131]: I0107 10:17:20.564138 5131 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/19dc369d-77d8-473a-86f9-b252306cbe4b-util\") on node \"crc\" DevicePath \"\""
Jan 07 10:17:21 crc kubenswrapper[5131]: I0107 10:17:21.002716 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65agm2xd"
Jan 07 10:17:21 crc kubenswrapper[5131]: I0107 10:17:21.005161 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65agm2xd" event={"ID":"19dc369d-77d8-473a-86f9-b252306cbe4b","Type":"ContainerDied","Data":"64a9057c5bd880ed39c44938afe59ebd05c0242994a01ba132d9374bf619ec9c"}
Jan 07 10:17:21 crc kubenswrapper[5131]: I0107 10:17:21.005239 5131 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64a9057c5bd880ed39c44938afe59ebd05c0242994a01ba132d9374bf619ec9c"
Jan 07 10:17:21 crc kubenswrapper[5131]: I0107 10:17:21.406748 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09552mv"
Jan 07 10:17:21 crc kubenswrapper[5131]: I0107 10:17:21.477481 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwfvq\" (UniqueName: \"kubernetes.io/projected/f2f68d64-4c2a-4064-9087-fa806d271914-kube-api-access-gwfvq\") pod \"f2f68d64-4c2a-4064-9087-fa806d271914\" (UID: \"f2f68d64-4c2a-4064-9087-fa806d271914\") "
Jan 07 10:17:21 crc kubenswrapper[5131]: I0107 10:17:21.477547 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f2f68d64-4c2a-4064-9087-fa806d271914-util\") pod \"f2f68d64-4c2a-4064-9087-fa806d271914\" (UID: \"f2f68d64-4c2a-4064-9087-fa806d271914\") "
Jan 07 10:17:21 crc kubenswrapper[5131]: I0107 10:17:21.477737 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f2f68d64-4c2a-4064-9087-fa806d271914-bundle\") pod \"f2f68d64-4c2a-4064-9087-fa806d271914\" (UID: \"f2f68d64-4c2a-4064-9087-fa806d271914\") "
Jan 07 10:17:21 crc kubenswrapper[5131]: I0107 10:17:21.478979 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2f68d64-4c2a-4064-9087-fa806d271914-bundle" (OuterVolumeSpecName: "bundle") pod "f2f68d64-4c2a-4064-9087-fa806d271914" (UID: "f2f68d64-4c2a-4064-9087-fa806d271914"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 07 10:17:21 crc kubenswrapper[5131]: I0107 10:17:21.489219 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2f68d64-4c2a-4064-9087-fa806d271914-kube-api-access-gwfvq" (OuterVolumeSpecName: "kube-api-access-gwfvq") pod "f2f68d64-4c2a-4064-9087-fa806d271914" (UID: "f2f68d64-4c2a-4064-9087-fa806d271914"). InnerVolumeSpecName "kube-api-access-gwfvq". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 07 10:17:21 crc kubenswrapper[5131]: I0107 10:17:21.508547 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2f68d64-4c2a-4064-9087-fa806d271914-util" (OuterVolumeSpecName: "util") pod "f2f68d64-4c2a-4064-9087-fa806d271914" (UID: "f2f68d64-4c2a-4064-9087-fa806d271914"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 07 10:17:21 crc kubenswrapper[5131]: I0107 10:17:21.579053 5131 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f2f68d64-4c2a-4064-9087-fa806d271914-bundle\") on node \"crc\" DevicePath \"\""
Jan 07 10:17:21 crc kubenswrapper[5131]: I0107 10:17:21.579088 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gwfvq\" (UniqueName: \"kubernetes.io/projected/f2f68d64-4c2a-4064-9087-fa806d271914-kube-api-access-gwfvq\") on node \"crc\" DevicePath \"\""
Jan 07 10:17:21 crc kubenswrapper[5131]: I0107 10:17:21.579097 5131 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f2f68d64-4c2a-4064-9087-fa806d271914-util\") on node \"crc\" DevicePath \"\""
Jan 07 10:17:22 crc kubenswrapper[5131]: I0107 10:17:22.021995 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09552mv"
Jan 07 10:17:22 crc kubenswrapper[5131]: I0107 10:17:22.021994 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09552mv" event={"ID":"f2f68d64-4c2a-4064-9087-fa806d271914","Type":"ContainerDied","Data":"c9c042fbc3f34e172cd86793cbc71cc4fc1590d9450b3968cb3c36f7915da8bb"}
Jan 07 10:17:22 crc kubenswrapper[5131]: I0107 10:17:22.022153 5131 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9c042fbc3f34e172cd86793cbc71cc4fc1590d9450b3968cb3c36f7915da8bb"
Jan 07 10:17:25 crc kubenswrapper[5131]: I0107 10:17:25.877344 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-55d55b9dd-hgtkz"]
Jan 07 10:17:25 crc kubenswrapper[5131]: I0107 10:17:25.878487 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f2f68d64-4c2a-4064-9087-fa806d271914" containerName="util"
Jan 07 10:17:25 crc kubenswrapper[5131]: I0107 10:17:25.878505 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2f68d64-4c2a-4064-9087-fa806d271914" containerName="util"
Jan 07 10:17:25 crc kubenswrapper[5131]: I0107 10:17:25.878525 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="19dc369d-77d8-473a-86f9-b252306cbe4b" containerName="util"
Jan 07 10:17:25 crc kubenswrapper[5131]: I0107 10:17:25.878533 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="19dc369d-77d8-473a-86f9-b252306cbe4b" containerName="util"
Jan 07 10:17:25 crc kubenswrapper[5131]: I0107 10:17:25.878545 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f2f68d64-4c2a-4064-9087-fa806d271914" containerName="pull"
Jan 07 10:17:25 crc kubenswrapper[5131]: I0107 10:17:25.878554 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2f68d64-4c2a-4064-9087-fa806d271914" containerName="pull"
Jan 07 10:17:25 crc kubenswrapper[5131]: I0107 10:17:25.878572 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f2f68d64-4c2a-4064-9087-fa806d271914" containerName="extract"
Jan 07 10:17:25 crc kubenswrapper[5131]: I0107 10:17:25.878580 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2f68d64-4c2a-4064-9087-fa806d271914" containerName="extract"
Jan 07 10:17:25 crc kubenswrapper[5131]: I0107 10:17:25.878596 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="19dc369d-77d8-473a-86f9-b252306cbe4b" containerName="pull"
Jan 07 10:17:25 crc kubenswrapper[5131]: I0107 10:17:25.878603 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="19dc369d-77d8-473a-86f9-b252306cbe4b" containerName="pull"
Jan 07 10:17:25 crc kubenswrapper[5131]: I0107 10:17:25.878625 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="19dc369d-77d8-473a-86f9-b252306cbe4b" containerName="extract"
Jan 07 10:17:25 crc kubenswrapper[5131]: I0107 10:17:25.878632 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="19dc369d-77d8-473a-86f9-b252306cbe4b" containerName="extract"
Jan 07 10:17:25 crc kubenswrapper[5131]: I0107 10:17:25.878755 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="f2f68d64-4c2a-4064-9087-fa806d271914" containerName="extract"
Jan 07 10:17:25 crc kubenswrapper[5131]: I0107 10:17:25.878782 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="19dc369d-77d8-473a-86f9-b252306cbe4b" containerName="extract"
Jan 07 10:17:25 crc kubenswrapper[5131]: I0107 10:17:25.993549 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-55d55b9dd-hgtkz"]
Jan 07 10:17:25 crc kubenswrapper[5131]: I0107 10:17:25.993766 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-55d55b9dd-hgtkz"
Jan 07 10:17:25 crc kubenswrapper[5131]: I0107 10:17:25.995633 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-dockercfg-gqqnw\""
Jan 07 10:17:26 crc kubenswrapper[5131]: I0107 10:17:26.041099 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2tg6\" (UniqueName: \"kubernetes.io/projected/f6080c03-03ff-4838-9364-c576264256a4-kube-api-access-v2tg6\") pod \"smart-gateway-operator-55d55b9dd-hgtkz\" (UID: \"f6080c03-03ff-4838-9364-c576264256a4\") " pod="service-telemetry/smart-gateway-operator-55d55b9dd-hgtkz"
Jan 07 10:17:26 crc kubenswrapper[5131]: I0107 10:17:26.041151 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/f6080c03-03ff-4838-9364-c576264256a4-runner\") pod \"smart-gateway-operator-55d55b9dd-hgtkz\" (UID: \"f6080c03-03ff-4838-9364-c576264256a4\") " pod="service-telemetry/smart-gateway-operator-55d55b9dd-hgtkz"
Jan 07 10:17:26 crc kubenswrapper[5131]: I0107 10:17:26.142793 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v2tg6\" (UniqueName: \"kubernetes.io/projected/f6080c03-03ff-4838-9364-c576264256a4-kube-api-access-v2tg6\") pod \"smart-gateway-operator-55d55b9dd-hgtkz\" (UID: \"f6080c03-03ff-4838-9364-c576264256a4\") " pod="service-telemetry/smart-gateway-operator-55d55b9dd-hgtkz"
Jan 07 10:17:26 crc kubenswrapper[5131]: I0107 10:17:26.142881 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/f6080c03-03ff-4838-9364-c576264256a4-runner\") pod \"smart-gateway-operator-55d55b9dd-hgtkz\" (UID: \"f6080c03-03ff-4838-9364-c576264256a4\") " pod="service-telemetry/smart-gateway-operator-55d55b9dd-hgtkz"
Jan 07 10:17:26 crc kubenswrapper[5131]: I0107 10:17:26.143438 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/f6080c03-03ff-4838-9364-c576264256a4-runner\") pod \"smart-gateway-operator-55d55b9dd-hgtkz\" (UID: \"f6080c03-03ff-4838-9364-c576264256a4\") " pod="service-telemetry/smart-gateway-operator-55d55b9dd-hgtkz"
Jan 07 10:17:26 crc kubenswrapper[5131]: I0107 10:17:26.170676 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2tg6\" (UniqueName: \"kubernetes.io/projected/f6080c03-03ff-4838-9364-c576264256a4-kube-api-access-v2tg6\") pod \"smart-gateway-operator-55d55b9dd-hgtkz\" (UID: \"f6080c03-03ff-4838-9364-c576264256a4\") " pod="service-telemetry/smart-gateway-operator-55d55b9dd-hgtkz"
Jan 07 10:17:26 crc kubenswrapper[5131]: I0107 10:17:26.308047 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-55d55b9dd-hgtkz"
Jan 07 10:17:26 crc kubenswrapper[5131]: I0107 10:17:26.740787 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-55d55b9dd-hgtkz"]
Jan 07 10:17:26 crc kubenswrapper[5131]: W0107 10:17:26.760246 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6080c03_03ff_4838_9364_c576264256a4.slice/crio-1d142beae91613b514a59f1752e2ad30445a82599e528c259d09eb4092e7b74a WatchSource:0}: Error finding container 1d142beae91613b514a59f1752e2ad30445a82599e528c259d09eb4092e7b74a: Status 404 returned error can't find the container with id 1d142beae91613b514a59f1752e2ad30445a82599e528c259d09eb4092e7b74a
Jan 07 10:17:27 crc kubenswrapper[5131]: I0107 10:17:27.059403 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-55d55b9dd-hgtkz" event={"ID":"f6080c03-03ff-4838-9364-c576264256a4","Type":"ContainerStarted","Data":"1d142beae91613b514a59f1752e2ad30445a82599e528c259d09eb4092e7b74a"}
Jan 07 10:17:27 crc kubenswrapper[5131]: I0107 10:17:27.317662 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-6fc67b8db8-clpch"]
Jan 07 10:17:27 crc kubenswrapper[5131]: I0107 10:17:27.696507 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-6fc67b8db8-clpch"]
Jan 07 10:17:27 crc kubenswrapper[5131]: I0107 10:17:27.696932 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-6fc67b8db8-clpch"
Jan 07 10:17:27 crc kubenswrapper[5131]: I0107 10:17:27.703375 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-dockercfg-2g2wh\""
Jan 07 10:17:27 crc kubenswrapper[5131]: I0107 10:17:27.763936 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/8878a7e2-945b-4a2b-bb71-136be5087273-runner\") pod \"service-telemetry-operator-6fc67b8db8-clpch\" (UID: \"8878a7e2-945b-4a2b-bb71-136be5087273\") " pod="service-telemetry/service-telemetry-operator-6fc67b8db8-clpch"
Jan 07 10:17:27 crc kubenswrapper[5131]: I0107 10:17:27.764392 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zhr9\" (UniqueName: \"kubernetes.io/projected/8878a7e2-945b-4a2b-bb71-136be5087273-kube-api-access-8zhr9\") pod \"service-telemetry-operator-6fc67b8db8-clpch\" (UID: \"8878a7e2-945b-4a2b-bb71-136be5087273\") " pod="service-telemetry/service-telemetry-operator-6fc67b8db8-clpch"
Jan 07 10:17:27 crc kubenswrapper[5131]: I0107 10:17:27.865932 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/8878a7e2-945b-4a2b-bb71-136be5087273-runner\") pod \"service-telemetry-operator-6fc67b8db8-clpch\" (UID: \"8878a7e2-945b-4a2b-bb71-136be5087273\") " pod="service-telemetry/service-telemetry-operator-6fc67b8db8-clpch"
Jan 07 10:17:27 crc kubenswrapper[5131]: I0107 10:17:27.866097 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8zhr9\" (UniqueName: \"kubernetes.io/projected/8878a7e2-945b-4a2b-bb71-136be5087273-kube-api-access-8zhr9\") pod \"service-telemetry-operator-6fc67b8db8-clpch\" (UID: \"8878a7e2-945b-4a2b-bb71-136be5087273\") " pod="service-telemetry/service-telemetry-operator-6fc67b8db8-clpch"
Jan 07 10:17:27 crc kubenswrapper[5131]: I0107 10:17:27.866650 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/8878a7e2-945b-4a2b-bb71-136be5087273-runner\") pod \"service-telemetry-operator-6fc67b8db8-clpch\" (UID: \"8878a7e2-945b-4a2b-bb71-136be5087273\") " pod="service-telemetry/service-telemetry-operator-6fc67b8db8-clpch"
Jan 07 10:17:27 crc kubenswrapper[5131]: I0107 10:17:27.892407 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zhr9\" (UniqueName: \"kubernetes.io/projected/8878a7e2-945b-4a2b-bb71-136be5087273-kube-api-access-8zhr9\") pod \"service-telemetry-operator-6fc67b8db8-clpch\" (UID: \"8878a7e2-945b-4a2b-bb71-136be5087273\") " pod="service-telemetry/service-telemetry-operator-6fc67b8db8-clpch"
Jan 07 10:17:28 crc kubenswrapper[5131]: I0107 10:17:28.028308 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-6fc67b8db8-clpch"
Jan 07 10:17:28 crc kubenswrapper[5131]: I0107 10:17:28.474306 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-6fc67b8db8-clpch"]
Jan 07 10:17:29 crc kubenswrapper[5131]: I0107 10:17:29.082111 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-6fc67b8db8-clpch" event={"ID":"8878a7e2-945b-4a2b-bb71-136be5087273","Type":"ContainerStarted","Data":"17de31fefb2c365a0bb23e5c632f964f4f9bb1c6eae18c82ee76ac27aaf1a937"}
Jan 07 10:17:29 crc kubenswrapper[5131]: I0107 10:17:29.183553 5131 scope.go:117] "RemoveContainer" containerID="9663cd7495facf8f3b5c9cd42ca06c0e50d8cba730f2743bbdac9e0b5db67e25"
Jan 07 10:17:29 crc kubenswrapper[5131]: E0107 10:17:29.183887 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvdrn_openshift-machine-config-operator(3942e752-44ba-4678-8723-6cd778e60d73)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73"
Jan 07 10:17:43 crc kubenswrapper[5131]: I0107 10:17:43.180433 5131 scope.go:117] "RemoveContainer" containerID="9663cd7495facf8f3b5c9cd42ca06c0e50d8cba730f2743bbdac9e0b5db67e25"
Jan 07 10:17:43 crc kubenswrapper[5131]: E0107 10:17:43.181213 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvdrn_openshift-machine-config-operator(3942e752-44ba-4678-8723-6cd778e60d73)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73"
Jan 07 10:17:51 crc kubenswrapper[5131]: I0107 10:17:51.563337 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-55d55b9dd-hgtkz" event={"ID":"f6080c03-03ff-4838-9364-c576264256a4","Type":"ContainerStarted","Data":"d5ffffcec285fc0f92a8cf7f49b99f462840d00ee573a8c5664dc1ea5449a10c"}
Jan 07 10:17:51 crc kubenswrapper[5131]: I0107 10:17:51.565337 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-6fc67b8db8-clpch" event={"ID":"8878a7e2-945b-4a2b-bb71-136be5087273","Type":"ContainerStarted","Data":"d8055081b874afb04d2aa5582f3ddaa3bae7e1f780c1f8994fbb676655fce883"}
Jan 07 10:17:51 crc kubenswrapper[5131]: I0107 10:17:51.585203 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-55d55b9dd-hgtkz" podStartSLOduration=2.417557892 podStartE2EDuration="26.585183504s" podCreationTimestamp="2026-01-07 10:17:25 +0000 UTC" firstStartedPulling="2026-01-07 10:17:26.763472521 +0000 UTC m=+1674.929774085" lastFinishedPulling="2026-01-07 10:17:50.931098133 +0000 UTC m=+1699.097399697" observedRunningTime="2026-01-07 10:17:51.579528323 +0000 UTC m=+1699.745829897" watchObservedRunningTime="2026-01-07 10:17:51.585183504 +0000 UTC m=+1699.751485078"
Jan 07 10:17:51 crc kubenswrapper[5131]: I0107 10:17:51.598705 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-6fc67b8db8-clpch" podStartSLOduration=2.156185291 podStartE2EDuration="24.598680581s" podCreationTimestamp="2026-01-07 10:17:27 +0000 UTC" firstStartedPulling="2026-01-07 10:17:28.508023206 +0000 UTC m=+1676.674324770" lastFinishedPulling="2026-01-07 10:17:50.950518496 +0000 UTC m=+1699.116820060" observedRunningTime="2026-01-07 10:17:51.592821225 +0000 UTC m=+1699.759122799" watchObservedRunningTime="2026-01-07 10:17:51.598680581 +0000 UTC m=+1699.764982155"
Jan 07 10:17:56 crc kubenswrapper[5131]: I0107 10:17:56.180696 5131 scope.go:117] "RemoveContainer" containerID="9663cd7495facf8f3b5c9cd42ca06c0e50d8cba730f2743bbdac9e0b5db67e25"
Jan 07 10:17:56 crc kubenswrapper[5131]: E0107 10:17:56.181320 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvdrn_openshift-machine-config-operator(3942e752-44ba-4678-8723-6cd778e60d73)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73"
Jan 07 10:18:00 crc kubenswrapper[5131]: I0107 10:18:00.140908 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29463018-c7dtt"]
Jan 07 10:18:00 crc kubenswrapper[5131]: I0107 10:18:00.639277 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29463018-c7dtt"
Jan 07 10:18:00 crc kubenswrapper[5131]: I0107 10:18:00.641956 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 07 10:18:00 crc kubenswrapper[5131]: I0107 10:18:00.643223 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 07 10:18:00 crc kubenswrapper[5131]: I0107 10:18:00.643344 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-l8fwl\""
Jan 07 10:18:00 crc kubenswrapper[5131]: I0107 10:18:00.648790 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29463018-c7dtt"]
Jan 07 10:18:00 crc kubenswrapper[5131]: I0107 10:18:00.796148 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j26f\" (UniqueName: \"kubernetes.io/projected/c2ab93fc-e076-411d-9769-d911fd2898b1-kube-api-access-6j26f\") pod \"auto-csr-approver-29463018-c7dtt\" (UID: \"c2ab93fc-e076-411d-9769-d911fd2898b1\") " pod="openshift-infra/auto-csr-approver-29463018-c7dtt"
Jan 07 10:18:00 crc kubenswrapper[5131]: I0107 10:18:00.897728 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6j26f\" (UniqueName: \"kubernetes.io/projected/c2ab93fc-e076-411d-9769-d911fd2898b1-kube-api-access-6j26f\") pod \"auto-csr-approver-29463018-c7dtt\" (UID: \"c2ab93fc-e076-411d-9769-d911fd2898b1\") " pod="openshift-infra/auto-csr-approver-29463018-c7dtt"
Jan 07 10:18:00 crc kubenswrapper[5131]: I0107 10:18:00.920488 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6j26f\" (UniqueName: \"kubernetes.io/projected/c2ab93fc-e076-411d-9769-d911fd2898b1-kube-api-access-6j26f\") pod \"auto-csr-approver-29463018-c7dtt\" (UID: \"c2ab93fc-e076-411d-9769-d911fd2898b1\") " pod="openshift-infra/auto-csr-approver-29463018-c7dtt"
Jan 07 10:18:00 crc kubenswrapper[5131]: I0107 10:18:00.960698 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29463018-c7dtt"
Jan 07 10:18:01 crc kubenswrapper[5131]: I0107 10:18:01.214356 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29463018-c7dtt"]
Jan 07 10:18:01 crc kubenswrapper[5131]: I0107 10:18:01.642576 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29463018-c7dtt" event={"ID":"c2ab93fc-e076-411d-9769-d911fd2898b1","Type":"ContainerStarted","Data":"0730ed012895a0cc0aa7f1f4e275bd1c416264396012e45cd1042df612ba24ed"}
Jan 07 10:18:03 crc kubenswrapper[5131]: I0107 10:18:03.667150 5131 generic.go:358] "Generic (PLEG): container finished" podID="c2ab93fc-e076-411d-9769-d911fd2898b1" containerID="7c193104d83fe6f04b986e7f6c25a781348580872df2c12e4eb38bb223caaa46" exitCode=0
Jan 07 10:18:03 crc kubenswrapper[5131]: I0107 10:18:03.667208 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29463018-c7dtt" event={"ID":"c2ab93fc-e076-411d-9769-d911fd2898b1","Type":"ContainerDied","Data":"7c193104d83fe6f04b986e7f6c25a781348580872df2c12e4eb38bb223caaa46"}
Jan 07 10:18:04 crc kubenswrapper[5131]: I0107 10:18:04.955293 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29463018-c7dtt"
Jan 07 10:18:04 crc kubenswrapper[5131]: I0107 10:18:04.960026 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6j26f\" (UniqueName: \"kubernetes.io/projected/c2ab93fc-e076-411d-9769-d911fd2898b1-kube-api-access-6j26f\") pod \"c2ab93fc-e076-411d-9769-d911fd2898b1\" (UID: \"c2ab93fc-e076-411d-9769-d911fd2898b1\") "
Jan 07 10:18:04 crc kubenswrapper[5131]: I0107 10:18:04.967483 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2ab93fc-e076-411d-9769-d911fd2898b1-kube-api-access-6j26f" (OuterVolumeSpecName: "kube-api-access-6j26f") pod "c2ab93fc-e076-411d-9769-d911fd2898b1" (UID: "c2ab93fc-e076-411d-9769-d911fd2898b1"). InnerVolumeSpecName "kube-api-access-6j26f". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 07 10:18:05 crc kubenswrapper[5131]: I0107 10:18:05.061690 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6j26f\" (UniqueName: \"kubernetes.io/projected/c2ab93fc-e076-411d-9769-d911fd2898b1-kube-api-access-6j26f\") on node \"crc\" DevicePath \"\""
Jan 07 10:18:05 crc kubenswrapper[5131]: I0107 10:18:05.682924 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29463018-c7dtt" event={"ID":"c2ab93fc-e076-411d-9769-d911fd2898b1","Type":"ContainerDied","Data":"0730ed012895a0cc0aa7f1f4e275bd1c416264396012e45cd1042df612ba24ed"}
Jan 07 10:18:05 crc kubenswrapper[5131]: I0107 10:18:05.683250 5131 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0730ed012895a0cc0aa7f1f4e275bd1c416264396012e45cd1042df612ba24ed"
Jan 07 10:18:05 crc kubenswrapper[5131]: I0107 10:18:05.682939 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29463018-c7dtt"
Jan 07 10:18:06 crc kubenswrapper[5131]: I0107 10:18:06.002291 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29463012-26ldf"]
Jan 07 10:18:06 crc kubenswrapper[5131]: I0107 10:18:06.008340 5131 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29463012-26ldf"]
Jan 07 10:18:06 crc kubenswrapper[5131]: I0107 10:18:06.189918 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec70975e-67f8-46e9-9c01-3d1050806e82" path="/var/lib/kubelet/pods/ec70975e-67f8-46e9-9c01-3d1050806e82/volumes"
Jan 07 10:18:11 crc kubenswrapper[5131]: I0107 10:18:11.181277 5131 scope.go:117] "RemoveContainer" containerID="9663cd7495facf8f3b5c9cd42ca06c0e50d8cba730f2743bbdac9e0b5db67e25"
Jan 07 10:18:11 crc kubenswrapper[5131]: E0107 10:18:11.182330 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvdrn_openshift-machine-config-operator(3942e752-44ba-4678-8723-6cd778e60d73)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73"
Jan 07 10:18:16 crc kubenswrapper[5131]: I0107 10:18:16.148632 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-xqws8"]
Jan 07 10:18:16 crc kubenswrapper[5131]: I0107 10:18:16.149890 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c2ab93fc-e076-411d-9769-d911fd2898b1" containerName="oc"
Jan 07 10:18:16 crc kubenswrapper[5131]: I0107 10:18:16.149910 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2ab93fc-e076-411d-9769-d911fd2898b1" containerName="oc"
Jan 07 10:18:16 crc kubenswrapper[5131]: I0107 10:18:16.150042 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="c2ab93fc-e076-411d-9769-d911fd2898b1" containerName="oc"
Jan 07 10:18:16 crc kubenswrapper[5131]: I0107 10:18:16.168941 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-xqws8"]
Jan 07 10:18:16 crc kubenswrapper[5131]: I0107 10:18:16.169053 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-xqws8"
Jan 07 10:18:16 crc kubenswrapper[5131]: I0107 10:18:16.170862 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-inter-router-ca\""
Jan 07 10:18:16 crc kubenswrapper[5131]: I0107 10:18:16.171350 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-openstack-ca\""
Jan 07 10:18:16 crc kubenswrapper[5131]: I0107 10:18:16.172131 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-interconnect-sasl-config\""
Jan 07 10:18:16 crc kubenswrapper[5131]: I0107 10:18:16.172388 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-users\""
Jan 07 10:18:16 crc kubenswrapper[5131]: I0107 10:18:16.172664 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-dockercfg-jf6kf\""
Jan 07 10:18:16 crc kubenswrapper[5131]: I0107 10:18:16.173108 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-inter-router-credentials\""
Jan 07 10:18:16 crc kubenswrapper[5131]: I0107 10:18:16.173337 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-openstack-credentials\""
Jan 07 10:18:16 crc kubenswrapper[5131]: I0107 10:18:16.221152 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hfkc\" (UniqueName: \"kubernetes.io/projected/961ff40e-d41b-4c63-b871-9d8d01acfc9e-kube-api-access-2hfkc\") pod \"default-interconnect-55bf8d5cb-xqws8\" (UID: \"961ff40e-d41b-4c63-b871-9d8d01acfc9e\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xqws8"
Jan 07 10:18:16 crc kubenswrapper[5131]: I0107 10:18:16.221232 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/961ff40e-d41b-4c63-b871-9d8d01acfc9e-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-xqws8\" (UID: \"961ff40e-d41b-4c63-b871-9d8d01acfc9e\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xqws8"
Jan 07 10:18:16 crc kubenswrapper[5131]: I0107 10:18:16.221286 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/961ff40e-d41b-4c63-b871-9d8d01acfc9e-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-xqws8\" (UID: \"961ff40e-d41b-4c63-b871-9d8d01acfc9e\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xqws8"
Jan 07 10:18:16 crc kubenswrapper[5131]: I0107 10:18:16.221307 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/961ff40e-d41b-4c63-b871-9d8d01acfc9e-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-xqws8\" (UID: \"961ff40e-d41b-4c63-b871-9d8d01acfc9e\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xqws8"
Jan 07 10:18:16 crc kubenswrapper[5131]: I0107 10:18:16.221498 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/961ff40e-d41b-4c63-b871-9d8d01acfc9e-sasl-config\") pod \"default-interconnect-55bf8d5cb-xqws8\" (UID: \"961ff40e-d41b-4c63-b871-9d8d01acfc9e\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xqws8"
Jan 07 10:18:16 crc kubenswrapper[5131]: I0107 10:18:16.221919 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/961ff40e-d41b-4c63-b871-9d8d01acfc9e-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-xqws8\" (UID: \"961ff40e-d41b-4c63-b871-9d8d01acfc9e\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xqws8"
Jan 07 10:18:16 crc kubenswrapper[5131]: I0107 10:18:16.221980 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/961ff40e-d41b-4c63-b871-9d8d01acfc9e-sasl-users\") pod \"default-interconnect-55bf8d5cb-xqws8\" (UID: \"961ff40e-d41b-4c63-b871-9d8d01acfc9e\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xqws8"
Jan 07 10:18:16 crc kubenswrapper[5131]: I0107 10:18:16.323569 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/961ff40e-d41b-4c63-b871-9d8d01acfc9e-sasl-config\") pod \"default-interconnect-55bf8d5cb-xqws8\" (UID: \"961ff40e-d41b-4c63-b871-9d8d01acfc9e\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xqws8"
Jan 07 10:18:16 crc kubenswrapper[5131]: I0107 10:18:16.323659 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/961ff40e-d41b-4c63-b871-9d8d01acfc9e-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-xqws8\" (UID: \"961ff40e-d41b-4c63-b871-9d8d01acfc9e\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xqws8"
Jan 07 10:18:16 crc kubenswrapper[5131]: I0107 10:18:16.323696 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/961ff40e-d41b-4c63-b871-9d8d01acfc9e-sasl-users\") pod \"default-interconnect-55bf8d5cb-xqws8\" (UID: \"961ff40e-d41b-4c63-b871-9d8d01acfc9e\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xqws8"
Jan 07 10:18:16 crc kubenswrapper[5131]: I0107 10:18:16.323751 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2hfkc\" (UniqueName: \"kubernetes.io/projected/961ff40e-d41b-4c63-b871-9d8d01acfc9e-kube-api-access-2hfkc\") pod \"default-interconnect-55bf8d5cb-xqws8\" (UID: \"961ff40e-d41b-4c63-b871-9d8d01acfc9e\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xqws8"
Jan 07 10:18:16 crc kubenswrapper[5131]: I0107 10:18:16.323781 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/961ff40e-d41b-4c63-b871-9d8d01acfc9e-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-xqws8\" (UID: \"961ff40e-d41b-4c63-b871-9d8d01acfc9e\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xqws8"
Jan 07 10:18:16 crc kubenswrapper[5131]: I0107 10:18:16.323860 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/961ff40e-d41b-4c63-b871-9d8d01acfc9e-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-xqws8\" (UID: \"961ff40e-d41b-4c63-b871-9d8d01acfc9e\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xqws8"
Jan 07 10:18:16 crc kubenswrapper[5131]: I0107 10:18:16.323886 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/961ff40e-d41b-4c63-b871-9d8d01acfc9e-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-xqws8\" (UID: \"961ff40e-d41b-4c63-b871-9d8d01acfc9e\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xqws8"
Jan 07 10:18:16 crc kubenswrapper[5131]: I0107 10:18:16.326520 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/961ff40e-d41b-4c63-b871-9d8d01acfc9e-sasl-config\") pod \"default-interconnect-55bf8d5cb-xqws8\" (UID: \"961ff40e-d41b-4c63-b871-9d8d01acfc9e\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xqws8"
Jan 07 10:18:16 crc kubenswrapper[5131]: I0107 10:18:16.333223 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/961ff40e-d41b-4c63-b871-9d8d01acfc9e-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-xqws8\" (UID: \"961ff40e-d41b-4c63-b871-9d8d01acfc9e\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xqws8"
Jan 07 10:18:16 crc kubenswrapper[5131]: I0107 10:18:16.336307 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/961ff40e-d41b-4c63-b871-9d8d01acfc9e-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-xqws8\" (UID: \"961ff40e-d41b-4c63-b871-9d8d01acfc9e\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xqws8"
Jan 07 10:18:16 crc kubenswrapper[5131]: I0107 10:18:16.352011 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/961ff40e-d41b-4c63-b871-9d8d01acfc9e-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-xqws8\" (UID: \"961ff40e-d41b-4c63-b871-9d8d01acfc9e\") "
pod="service-telemetry/default-interconnect-55bf8d5cb-xqws8" Jan 07 10:18:16 crc kubenswrapper[5131]: I0107 10:18:16.352665 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/961ff40e-d41b-4c63-b871-9d8d01acfc9e-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-xqws8\" (UID: \"961ff40e-d41b-4c63-b871-9d8d01acfc9e\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xqws8" Jan 07 10:18:16 crc kubenswrapper[5131]: I0107 10:18:16.356562 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/961ff40e-d41b-4c63-b871-9d8d01acfc9e-sasl-users\") pod \"default-interconnect-55bf8d5cb-xqws8\" (UID: \"961ff40e-d41b-4c63-b871-9d8d01acfc9e\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xqws8" Jan 07 10:18:16 crc kubenswrapper[5131]: I0107 10:18:16.357599 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hfkc\" (UniqueName: \"kubernetes.io/projected/961ff40e-d41b-4c63-b871-9d8d01acfc9e-kube-api-access-2hfkc\") pod \"default-interconnect-55bf8d5cb-xqws8\" (UID: \"961ff40e-d41b-4c63-b871-9d8d01acfc9e\") " pod="service-telemetry/default-interconnect-55bf8d5cb-xqws8" Jan 07 10:18:16 crc kubenswrapper[5131]: I0107 10:18:16.491245 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-xqws8" Jan 07 10:18:16 crc kubenswrapper[5131]: I0107 10:18:16.981873 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-xqws8"] Jan 07 10:18:17 crc kubenswrapper[5131]: I0107 10:18:17.785316 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-xqws8" event={"ID":"961ff40e-d41b-4c63-b871-9d8d01acfc9e","Type":"ContainerStarted","Data":"48503d0611935254848a5b2dcf8b89c5c2622004520b0dbfd4ce276db670aede"} Jan 07 10:18:22 crc kubenswrapper[5131]: I0107 10:18:22.832916 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-xqws8" event={"ID":"961ff40e-d41b-4c63-b871-9d8d01acfc9e","Type":"ContainerStarted","Data":"a1fa7e3ebf193de8c5e9aa0749c04fd3ac50d0a53ffbc5d46af894ce10a19eef"} Jan 07 10:18:22 crc kubenswrapper[5131]: I0107 10:18:22.853140 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-55bf8d5cb-xqws8" podStartSLOduration=2.152806336 podStartE2EDuration="6.853121754s" podCreationTimestamp="2026-01-07 10:18:16 +0000 UTC" firstStartedPulling="2026-01-07 10:18:16.988068703 +0000 UTC m=+1725.154370267" lastFinishedPulling="2026-01-07 10:18:21.688384121 +0000 UTC m=+1729.854685685" observedRunningTime="2026-01-07 10:18:22.851326379 +0000 UTC m=+1731.017628003" watchObservedRunningTime="2026-01-07 10:18:22.853121754 +0000 UTC m=+1731.019423308" Jan 07 10:18:23 crc kubenswrapper[5131]: I0107 10:18:23.180697 5131 scope.go:117] "RemoveContainer" containerID="9663cd7495facf8f3b5c9cd42ca06c0e50d8cba730f2743bbdac9e0b5db67e25" Jan 07 10:18:23 crc kubenswrapper[5131]: E0107 10:18:23.181239 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-dvdrn_openshift-machine-config-operator(3942e752-44ba-4678-8723-6cd778e60d73)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73" Jan 07 10:18:25 crc kubenswrapper[5131]: I0107 10:18:25.940041 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-default-0"] Jan 07 10:18:25 crc kubenswrapper[5131]: I0107 10:18:25.993247 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Jan 07 10:18:25 crc kubenswrapper[5131]: I0107 10:18:25.993388 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-default-0" Jan 07 10:18:25 crc kubenswrapper[5131]: I0107 10:18:25.996918 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"serving-certs-ca-bundle\"" Jan 07 10:18:25 crc kubenswrapper[5131]: I0107 10:18:25.997702 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default-tls-assets-0\"" Jan 07 10:18:25 crc kubenswrapper[5131]: I0107 10:18:25.997912 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-0\"" Jan 07 10:18:25 crc kubenswrapper[5131]: I0107 10:18:25.998365 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-prometheus-proxy-tls\"" Jan 07 10:18:25 crc kubenswrapper[5131]: I0107 10:18:25.999397 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-stf-dockercfg-t6fbq\"" Jan 07 10:18:25 crc kubenswrapper[5131]: I0107 10:18:25.999797 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default-web-config\"" Jan 07 10:18:26 crc kubenswrapper[5131]: I0107 
10:18:25.999985 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-2\"" Jan 07 10:18:26 crc kubenswrapper[5131]: I0107 10:18:26.000057 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default\"" Jan 07 10:18:26 crc kubenswrapper[5131]: I0107 10:18:26.000090 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-session-secret\"" Jan 07 10:18:26 crc kubenswrapper[5131]: I0107 10:18:26.002342 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-1\"" Jan 07 10:18:26 crc kubenswrapper[5131]: I0107 10:18:26.078534 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/0e6a5b7c-d782-4d92-ad16-7557ea99d644-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"0e6a5b7c-d782-4d92-ad16-7557ea99d644\") " pod="service-telemetry/prometheus-default-0" Jan 07 10:18:26 crc kubenswrapper[5131]: I0107 10:18:26.078591 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0e6a5b7c-d782-4d92-ad16-7557ea99d644-web-config\") pod \"prometheus-default-0\" (UID: \"0e6a5b7c-d782-4d92-ad16-7557ea99d644\") " pod="service-telemetry/prometheus-default-0" Jan 07 10:18:26 crc kubenswrapper[5131]: I0107 10:18:26.078614 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e6a5b7c-d782-4d92-ad16-7557ea99d644-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"0e6a5b7c-d782-4d92-ad16-7557ea99d644\") " pod="service-telemetry/prometheus-default-0" Jan 
07 10:18:26 crc kubenswrapper[5131]: I0107 10:18:26.078646 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0e6a5b7c-d782-4d92-ad16-7557ea99d644-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"0e6a5b7c-d782-4d92-ad16-7557ea99d644\") " pod="service-telemetry/prometheus-default-0" Jan 07 10:18:26 crc kubenswrapper[5131]: I0107 10:18:26.078849 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/0e6a5b7c-d782-4d92-ad16-7557ea99d644-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"0e6a5b7c-d782-4d92-ad16-7557ea99d644\") " pod="service-telemetry/prometheus-default-0" Jan 07 10:18:26 crc kubenswrapper[5131]: I0107 10:18:26.078912 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0e6a5b7c-d782-4d92-ad16-7557ea99d644-config-out\") pod \"prometheus-default-0\" (UID: \"0e6a5b7c-d782-4d92-ad16-7557ea99d644\") " pod="service-telemetry/prometheus-default-0" Jan 07 10:18:26 crc kubenswrapper[5131]: I0107 10:18:26.079018 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/0e6a5b7c-d782-4d92-ad16-7557ea99d644-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"0e6a5b7c-d782-4d92-ad16-7557ea99d644\") " pod="service-telemetry/prometheus-default-0" Jan 07 10:18:26 crc kubenswrapper[5131]: I0107 10:18:26.079085 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6j6q\" (UniqueName: \"kubernetes.io/projected/0e6a5b7c-d782-4d92-ad16-7557ea99d644-kube-api-access-l6j6q\") pod 
\"prometheus-default-0\" (UID: \"0e6a5b7c-d782-4d92-ad16-7557ea99d644\") " pod="service-telemetry/prometheus-default-0" Jan 07 10:18:26 crc kubenswrapper[5131]: I0107 10:18:26.079257 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e5fa9aa2-270c-4e44-9410-8a5b2e95c0ee\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e5fa9aa2-270c-4e44-9410-8a5b2e95c0ee\") pod \"prometheus-default-0\" (UID: \"0e6a5b7c-d782-4d92-ad16-7557ea99d644\") " pod="service-telemetry/prometheus-default-0" Jan 07 10:18:26 crc kubenswrapper[5131]: I0107 10:18:26.079505 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0e6a5b7c-d782-4d92-ad16-7557ea99d644-tls-assets\") pod \"prometheus-default-0\" (UID: \"0e6a5b7c-d782-4d92-ad16-7557ea99d644\") " pod="service-telemetry/prometheus-default-0" Jan 07 10:18:26 crc kubenswrapper[5131]: I0107 10:18:26.079614 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/0e6a5b7c-d782-4d92-ad16-7557ea99d644-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"0e6a5b7c-d782-4d92-ad16-7557ea99d644\") " pod="service-telemetry/prometheus-default-0" Jan 07 10:18:26 crc kubenswrapper[5131]: I0107 10:18:26.079715 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0e6a5b7c-d782-4d92-ad16-7557ea99d644-config\") pod \"prometheus-default-0\" (UID: \"0e6a5b7c-d782-4d92-ad16-7557ea99d644\") " pod="service-telemetry/prometheus-default-0" Jan 07 10:18:26 crc kubenswrapper[5131]: I0107 10:18:26.181631 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/0e6a5b7c-d782-4d92-ad16-7557ea99d644-config\") pod \"prometheus-default-0\" (UID: \"0e6a5b7c-d782-4d92-ad16-7557ea99d644\") " pod="service-telemetry/prometheus-default-0" Jan 07 10:18:26 crc kubenswrapper[5131]: I0107 10:18:26.181699 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/0e6a5b7c-d782-4d92-ad16-7557ea99d644-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"0e6a5b7c-d782-4d92-ad16-7557ea99d644\") " pod="service-telemetry/prometheus-default-0" Jan 07 10:18:26 crc kubenswrapper[5131]: I0107 10:18:26.181724 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0e6a5b7c-d782-4d92-ad16-7557ea99d644-web-config\") pod \"prometheus-default-0\" (UID: \"0e6a5b7c-d782-4d92-ad16-7557ea99d644\") " pod="service-telemetry/prometheus-default-0" Jan 07 10:18:26 crc kubenswrapper[5131]: I0107 10:18:26.181747 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e6a5b7c-d782-4d92-ad16-7557ea99d644-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"0e6a5b7c-d782-4d92-ad16-7557ea99d644\") " pod="service-telemetry/prometheus-default-0" Jan 07 10:18:26 crc kubenswrapper[5131]: I0107 10:18:26.181778 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0e6a5b7c-d782-4d92-ad16-7557ea99d644-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"0e6a5b7c-d782-4d92-ad16-7557ea99d644\") " pod="service-telemetry/prometheus-default-0" Jan 07 10:18:26 crc kubenswrapper[5131]: I0107 10:18:26.181810 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" 
(UniqueName: \"kubernetes.io/secret/0e6a5b7c-d782-4d92-ad16-7557ea99d644-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"0e6a5b7c-d782-4d92-ad16-7557ea99d644\") " pod="service-telemetry/prometheus-default-0" Jan 07 10:18:26 crc kubenswrapper[5131]: I0107 10:18:26.181850 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0e6a5b7c-d782-4d92-ad16-7557ea99d644-config-out\") pod \"prometheus-default-0\" (UID: \"0e6a5b7c-d782-4d92-ad16-7557ea99d644\") " pod="service-telemetry/prometheus-default-0" Jan 07 10:18:26 crc kubenswrapper[5131]: I0107 10:18:26.181884 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/0e6a5b7c-d782-4d92-ad16-7557ea99d644-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"0e6a5b7c-d782-4d92-ad16-7557ea99d644\") " pod="service-telemetry/prometheus-default-0" Jan 07 10:18:26 crc kubenswrapper[5131]: I0107 10:18:26.181911 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l6j6q\" (UniqueName: \"kubernetes.io/projected/0e6a5b7c-d782-4d92-ad16-7557ea99d644-kube-api-access-l6j6q\") pod \"prometheus-default-0\" (UID: \"0e6a5b7c-d782-4d92-ad16-7557ea99d644\") " pod="service-telemetry/prometheus-default-0" Jan 07 10:18:26 crc kubenswrapper[5131]: I0107 10:18:26.181947 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-e5fa9aa2-270c-4e44-9410-8a5b2e95c0ee\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e5fa9aa2-270c-4e44-9410-8a5b2e95c0ee\") pod \"prometheus-default-0\" (UID: \"0e6a5b7c-d782-4d92-ad16-7557ea99d644\") " pod="service-telemetry/prometheus-default-0" Jan 07 10:18:26 crc kubenswrapper[5131]: I0107 10:18:26.182025 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" 
(UniqueName: \"kubernetes.io/projected/0e6a5b7c-d782-4d92-ad16-7557ea99d644-tls-assets\") pod \"prometheus-default-0\" (UID: \"0e6a5b7c-d782-4d92-ad16-7557ea99d644\") " pod="service-telemetry/prometheus-default-0" Jan 07 10:18:26 crc kubenswrapper[5131]: I0107 10:18:26.182065 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/0e6a5b7c-d782-4d92-ad16-7557ea99d644-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"0e6a5b7c-d782-4d92-ad16-7557ea99d644\") " pod="service-telemetry/prometheus-default-0" Jan 07 10:18:26 crc kubenswrapper[5131]: E0107 10:18:26.182431 5131 secret.go:189] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Jan 07 10:18:26 crc kubenswrapper[5131]: E0107 10:18:26.182533 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e6a5b7c-d782-4d92-ad16-7557ea99d644-secret-default-prometheus-proxy-tls podName:0e6a5b7c-d782-4d92-ad16-7557ea99d644 nodeName:}" failed. No retries permitted until 2026-01-07 10:18:26.682509264 +0000 UTC m=+1734.848810838 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/0e6a5b7c-d782-4d92-ad16-7557ea99d644-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "0e6a5b7c-d782-4d92-ad16-7557ea99d644") : secret "default-prometheus-proxy-tls" not found Jan 07 10:18:26 crc kubenswrapper[5131]: I0107 10:18:26.183109 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/0e6a5b7c-d782-4d92-ad16-7557ea99d644-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"0e6a5b7c-d782-4d92-ad16-7557ea99d644\") " pod="service-telemetry/prometheus-default-0" Jan 07 10:18:26 crc kubenswrapper[5131]: I0107 10:18:26.184005 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0e6a5b7c-d782-4d92-ad16-7557ea99d644-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"0e6a5b7c-d782-4d92-ad16-7557ea99d644\") " pod="service-telemetry/prometheus-default-0" Jan 07 10:18:26 crc kubenswrapper[5131]: I0107 10:18:26.184236 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e6a5b7c-d782-4d92-ad16-7557ea99d644-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"0e6a5b7c-d782-4d92-ad16-7557ea99d644\") " pod="service-telemetry/prometheus-default-0" Jan 07 10:18:26 crc kubenswrapper[5131]: I0107 10:18:26.184906 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/0e6a5b7c-d782-4d92-ad16-7557ea99d644-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"0e6a5b7c-d782-4d92-ad16-7557ea99d644\") " pod="service-telemetry/prometheus-default-0" Jan 07 10:18:26 crc kubenswrapper[5131]: I0107 
10:18:26.191141 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0e6a5b7c-d782-4d92-ad16-7557ea99d644-config-out\") pod \"prometheus-default-0\" (UID: \"0e6a5b7c-d782-4d92-ad16-7557ea99d644\") " pod="service-telemetry/prometheus-default-0" Jan 07 10:18:26 crc kubenswrapper[5131]: I0107 10:18:26.191238 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/0e6a5b7c-d782-4d92-ad16-7557ea99d644-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"0e6a5b7c-d782-4d92-ad16-7557ea99d644\") " pod="service-telemetry/prometheus-default-0" Jan 07 10:18:26 crc kubenswrapper[5131]: I0107 10:18:26.192357 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0e6a5b7c-d782-4d92-ad16-7557ea99d644-web-config\") pod \"prometheus-default-0\" (UID: \"0e6a5b7c-d782-4d92-ad16-7557ea99d644\") " pod="service-telemetry/prometheus-default-0" Jan 07 10:18:26 crc kubenswrapper[5131]: I0107 10:18:26.192523 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0e6a5b7c-d782-4d92-ad16-7557ea99d644-config\") pod \"prometheus-default-0\" (UID: \"0e6a5b7c-d782-4d92-ad16-7557ea99d644\") " pod="service-telemetry/prometheus-default-0" Jan 07 10:18:26 crc kubenswrapper[5131]: I0107 10:18:26.199442 5131 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 07 10:18:26 crc kubenswrapper[5131]: I0107 10:18:26.199472 5131 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-e5fa9aa2-270c-4e44-9410-8a5b2e95c0ee\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e5fa9aa2-270c-4e44-9410-8a5b2e95c0ee\") pod \"prometheus-default-0\" (UID: \"0e6a5b7c-d782-4d92-ad16-7557ea99d644\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1358494794b20dfec1835fbe09b0f69e9e776621e03b182e40aadae790f0bb2b/globalmount\"" pod="service-telemetry/prometheus-default-0" Jan 07 10:18:26 crc kubenswrapper[5131]: I0107 10:18:26.202507 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6j6q\" (UniqueName: \"kubernetes.io/projected/0e6a5b7c-d782-4d92-ad16-7557ea99d644-kube-api-access-l6j6q\") pod \"prometheus-default-0\" (UID: \"0e6a5b7c-d782-4d92-ad16-7557ea99d644\") " pod="service-telemetry/prometheus-default-0" Jan 07 10:18:26 crc kubenswrapper[5131]: I0107 10:18:26.203661 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0e6a5b7c-d782-4d92-ad16-7557ea99d644-tls-assets\") pod \"prometheus-default-0\" (UID: \"0e6a5b7c-d782-4d92-ad16-7557ea99d644\") " pod="service-telemetry/prometheus-default-0" Jan 07 10:18:26 crc kubenswrapper[5131]: I0107 10:18:26.223894 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-e5fa9aa2-270c-4e44-9410-8a5b2e95c0ee\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e5fa9aa2-270c-4e44-9410-8a5b2e95c0ee\") pod \"prometheus-default-0\" (UID: \"0e6a5b7c-d782-4d92-ad16-7557ea99d644\") " pod="service-telemetry/prometheus-default-0" Jan 07 10:18:26 crc kubenswrapper[5131]: I0107 10:18:26.691000 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/0e6a5b7c-d782-4d92-ad16-7557ea99d644-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"0e6a5b7c-d782-4d92-ad16-7557ea99d644\") " pod="service-telemetry/prometheus-default-0" Jan 07 10:18:26 crc kubenswrapper[5131]: E0107 10:18:26.691328 5131 secret.go:189] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Jan 07 10:18:26 crc kubenswrapper[5131]: E0107 10:18:26.691500 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e6a5b7c-d782-4d92-ad16-7557ea99d644-secret-default-prometheus-proxy-tls podName:0e6a5b7c-d782-4d92-ad16-7557ea99d644 nodeName:}" failed. No retries permitted until 2026-01-07 10:18:27.691457681 +0000 UTC m=+1735.857759285 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/0e6a5b7c-d782-4d92-ad16-7557ea99d644-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "0e6a5b7c-d782-4d92-ad16-7557ea99d644") : secret "default-prometheus-proxy-tls" not found Jan 07 10:18:27 crc kubenswrapper[5131]: I0107 10:18:27.706769 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/0e6a5b7c-d782-4d92-ad16-7557ea99d644-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"0e6a5b7c-d782-4d92-ad16-7557ea99d644\") " pod="service-telemetry/prometheus-default-0" Jan 07 10:18:27 crc kubenswrapper[5131]: I0107 10:18:27.711515 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/0e6a5b7c-d782-4d92-ad16-7557ea99d644-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"0e6a5b7c-d782-4d92-ad16-7557ea99d644\") " pod="service-telemetry/prometheus-default-0" Jan 07 10:18:27 crc 
kubenswrapper[5131]: I0107 10:18:27.830193 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-default-0" Jan 07 10:18:28 crc kubenswrapper[5131]: I0107 10:18:28.081556 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Jan 07 10:18:28 crc kubenswrapper[5131]: W0107 10:18:28.084805 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e6a5b7c_d782_4d92_ad16_7557ea99d644.slice/crio-96de7a29d9c9dd5533b2318409af9bd428266335eb3bc4960cf6e3685fd3431f WatchSource:0}: Error finding container 96de7a29d9c9dd5533b2318409af9bd428266335eb3bc4960cf6e3685fd3431f: Status 404 returned error can't find the container with id 96de7a29d9c9dd5533b2318409af9bd428266335eb3bc4960cf6e3685fd3431f Jan 07 10:18:28 crc kubenswrapper[5131]: I0107 10:18:28.903314 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"0e6a5b7c-d782-4d92-ad16-7557ea99d644","Type":"ContainerStarted","Data":"96de7a29d9c9dd5533b2318409af9bd428266335eb3bc4960cf6e3685fd3431f"} Jan 07 10:18:33 crc kubenswrapper[5131]: I0107 10:18:33.949581 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"0e6a5b7c-d782-4d92-ad16-7557ea99d644","Type":"ContainerStarted","Data":"403459e4ca65198bdc90425ab35423045857cb334f7b5dea5029bee6791f7928"} Jan 07 10:18:34 crc kubenswrapper[5131]: I0107 10:18:34.647271 5131 scope.go:117] "RemoveContainer" containerID="b72377165b1e2e109687a3c5644c7cf63622723ed2ade5378196c0d0d382ba5a" Jan 07 10:18:35 crc kubenswrapper[5131]: I0107 10:18:35.798958 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-snmp-webhook-694dc457d5-spwzd"] Jan 07 10:18:35 crc kubenswrapper[5131]: I0107 10:18:35.813014 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["service-telemetry/default-snmp-webhook-694dc457d5-spwzd"] Jan 07 10:18:35 crc kubenswrapper[5131]: I0107 10:18:35.813171 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-snmp-webhook-694dc457d5-spwzd" Jan 07 10:18:35 crc kubenswrapper[5131]: I0107 10:18:35.947376 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g6fb\" (UniqueName: \"kubernetes.io/projected/a2b52004-8840-457e-8c8b-a42110570a94-kube-api-access-4g6fb\") pod \"default-snmp-webhook-694dc457d5-spwzd\" (UID: \"a2b52004-8840-457e-8c8b-a42110570a94\") " pod="service-telemetry/default-snmp-webhook-694dc457d5-spwzd" Jan 07 10:18:36 crc kubenswrapper[5131]: I0107 10:18:36.048718 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4g6fb\" (UniqueName: \"kubernetes.io/projected/a2b52004-8840-457e-8c8b-a42110570a94-kube-api-access-4g6fb\") pod \"default-snmp-webhook-694dc457d5-spwzd\" (UID: \"a2b52004-8840-457e-8c8b-a42110570a94\") " pod="service-telemetry/default-snmp-webhook-694dc457d5-spwzd" Jan 07 10:18:36 crc kubenswrapper[5131]: I0107 10:18:36.076882 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4g6fb\" (UniqueName: \"kubernetes.io/projected/a2b52004-8840-457e-8c8b-a42110570a94-kube-api-access-4g6fb\") pod \"default-snmp-webhook-694dc457d5-spwzd\" (UID: \"a2b52004-8840-457e-8c8b-a42110570a94\") " pod="service-telemetry/default-snmp-webhook-694dc457d5-spwzd" Jan 07 10:18:36 crc kubenswrapper[5131]: I0107 10:18:36.142405 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-snmp-webhook-694dc457d5-spwzd" Jan 07 10:18:36 crc kubenswrapper[5131]: I0107 10:18:36.363089 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-694dc457d5-spwzd"] Jan 07 10:18:36 crc kubenswrapper[5131]: W0107 10:18:36.366551 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2b52004_8840_457e_8c8b_a42110570a94.slice/crio-7bbacb9a8540cf877e49500666d7e2378a37b1ec4ae5ec0421a4ddb3df974346 WatchSource:0}: Error finding container 7bbacb9a8540cf877e49500666d7e2378a37b1ec4ae5ec0421a4ddb3df974346: Status 404 returned error can't find the container with id 7bbacb9a8540cf877e49500666d7e2378a37b1ec4ae5ec0421a4ddb3df974346 Jan 07 10:18:36 crc kubenswrapper[5131]: I0107 10:18:36.976384 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-694dc457d5-spwzd" event={"ID":"a2b52004-8840-457e-8c8b-a42110570a94","Type":"ContainerStarted","Data":"7bbacb9a8540cf877e49500666d7e2378a37b1ec4ae5ec0421a4ddb3df974346"} Jan 07 10:18:38 crc kubenswrapper[5131]: I0107 10:18:38.180207 5131 scope.go:117] "RemoveContainer" containerID="9663cd7495facf8f3b5c9cd42ca06c0e50d8cba730f2743bbdac9e0b5db67e25" Jan 07 10:18:38 crc kubenswrapper[5131]: E0107 10:18:38.180675 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvdrn_openshift-machine-config-operator(3942e752-44ba-4678-8723-6cd778e60d73)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73" Jan 07 10:18:39 crc kubenswrapper[5131]: I0107 10:18:39.459003 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/alertmanager-default-0"] Jan 07 10:18:39 crc 
kubenswrapper[5131]: I0107 10:18:39.503048 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Jan 07 10:18:39 crc kubenswrapper[5131]: I0107 10:18:39.503222 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/alertmanager-default-0" Jan 07 10:18:39 crc kubenswrapper[5131]: I0107 10:18:39.505317 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-tls-assets-0\"" Jan 07 10:18:39 crc kubenswrapper[5131]: I0107 10:18:39.507967 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-alertmanager-proxy-tls\"" Jan 07 10:18:39 crc kubenswrapper[5131]: I0107 10:18:39.508021 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-generated\"" Jan 07 10:18:39 crc kubenswrapper[5131]: I0107 10:18:39.508313 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-web-config\"" Jan 07 10:18:39 crc kubenswrapper[5131]: I0107 10:18:39.508484 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-cluster-tls-config\"" Jan 07 10:18:39 crc kubenswrapper[5131]: I0107 10:18:39.508667 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-stf-dockercfg-7hd5d\"" Jan 07 10:18:39 crc kubenswrapper[5131]: I0107 10:18:39.601504 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1d78a5b3-b901-4eb9-bf3c-099adf94b65d-web-config\") pod \"alertmanager-default-0\" (UID: \"1d78a5b3-b901-4eb9-bf3c-099adf94b65d\") " pod="service-telemetry/alertmanager-default-0" Jan 07 10:18:39 crc kubenswrapper[5131]: I0107 10:18:39.601564 
5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/1d78a5b3-b901-4eb9-bf3c-099adf94b65d-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"1d78a5b3-b901-4eb9-bf3c-099adf94b65d\") " pod="service-telemetry/alertmanager-default-0" Jan 07 10:18:39 crc kubenswrapper[5131]: I0107 10:18:39.601613 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/1d78a5b3-b901-4eb9-bf3c-099adf94b65d-config-volume\") pod \"alertmanager-default-0\" (UID: \"1d78a5b3-b901-4eb9-bf3c-099adf94b65d\") " pod="service-telemetry/alertmanager-default-0" Jan 07 10:18:39 crc kubenswrapper[5131]: I0107 10:18:39.601687 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgmlh\" (UniqueName: \"kubernetes.io/projected/1d78a5b3-b901-4eb9-bf3c-099adf94b65d-kube-api-access-sgmlh\") pod \"alertmanager-default-0\" (UID: \"1d78a5b3-b901-4eb9-bf3c-099adf94b65d\") " pod="service-telemetry/alertmanager-default-0" Jan 07 10:18:39 crc kubenswrapper[5131]: I0107 10:18:39.601868 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/1d78a5b3-b901-4eb9-bf3c-099adf94b65d-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"1d78a5b3-b901-4eb9-bf3c-099adf94b65d\") " pod="service-telemetry/alertmanager-default-0" Jan 07 10:18:39 crc kubenswrapper[5131]: I0107 10:18:39.601932 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ba967211-d742-4b7a-9f6d-c0913f00be01\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba967211-d742-4b7a-9f6d-c0913f00be01\") pod \"alertmanager-default-0\" (UID: 
\"1d78a5b3-b901-4eb9-bf3c-099adf94b65d\") " pod="service-telemetry/alertmanager-default-0" Jan 07 10:18:39 crc kubenswrapper[5131]: I0107 10:18:39.601960 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/1d78a5b3-b901-4eb9-bf3c-099adf94b65d-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"1d78a5b3-b901-4eb9-bf3c-099adf94b65d\") " pod="service-telemetry/alertmanager-default-0" Jan 07 10:18:39 crc kubenswrapper[5131]: I0107 10:18:39.601985 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1d78a5b3-b901-4eb9-bf3c-099adf94b65d-tls-assets\") pod \"alertmanager-default-0\" (UID: \"1d78a5b3-b901-4eb9-bf3c-099adf94b65d\") " pod="service-telemetry/alertmanager-default-0" Jan 07 10:18:39 crc kubenswrapper[5131]: I0107 10:18:39.602021 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1d78a5b3-b901-4eb9-bf3c-099adf94b65d-config-out\") pod \"alertmanager-default-0\" (UID: \"1d78a5b3-b901-4eb9-bf3c-099adf94b65d\") " pod="service-telemetry/alertmanager-default-0" Jan 07 10:18:39 crc kubenswrapper[5131]: I0107 10:18:39.703388 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/1d78a5b3-b901-4eb9-bf3c-099adf94b65d-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"1d78a5b3-b901-4eb9-bf3c-099adf94b65d\") " pod="service-telemetry/alertmanager-default-0" Jan 07 10:18:39 crc kubenswrapper[5131]: I0107 10:18:39.703431 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-ba967211-d742-4b7a-9f6d-c0913f00be01\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba967211-d742-4b7a-9f6d-c0913f00be01\") pod \"alertmanager-default-0\" (UID: \"1d78a5b3-b901-4eb9-bf3c-099adf94b65d\") " pod="service-telemetry/alertmanager-default-0" Jan 07 10:18:39 crc kubenswrapper[5131]: I0107 10:18:39.703525 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/1d78a5b3-b901-4eb9-bf3c-099adf94b65d-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"1d78a5b3-b901-4eb9-bf3c-099adf94b65d\") " pod="service-telemetry/alertmanager-default-0" Jan 07 10:18:39 crc kubenswrapper[5131]: I0107 10:18:39.703550 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1d78a5b3-b901-4eb9-bf3c-099adf94b65d-tls-assets\") pod \"alertmanager-default-0\" (UID: \"1d78a5b3-b901-4eb9-bf3c-099adf94b65d\") " pod="service-telemetry/alertmanager-default-0" Jan 07 10:18:39 crc kubenswrapper[5131]: I0107 10:18:39.703586 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1d78a5b3-b901-4eb9-bf3c-099adf94b65d-config-out\") pod \"alertmanager-default-0\" (UID: \"1d78a5b3-b901-4eb9-bf3c-099adf94b65d\") " pod="service-telemetry/alertmanager-default-0" Jan 07 10:18:39 crc kubenswrapper[5131]: E0107 10:18:39.703606 5131 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Jan 07 10:18:39 crc kubenswrapper[5131]: I0107 10:18:39.703620 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1d78a5b3-b901-4eb9-bf3c-099adf94b65d-web-config\") pod \"alertmanager-default-0\" (UID: \"1d78a5b3-b901-4eb9-bf3c-099adf94b65d\") " pod="service-telemetry/alertmanager-default-0" Jan 07 10:18:39 crc 
kubenswrapper[5131]: E0107 10:18:39.703721 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d78a5b3-b901-4eb9-bf3c-099adf94b65d-secret-default-alertmanager-proxy-tls podName:1d78a5b3-b901-4eb9-bf3c-099adf94b65d nodeName:}" failed. No retries permitted until 2026-01-07 10:18:40.203692176 +0000 UTC m=+1748.369993780 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/1d78a5b3-b901-4eb9-bf3c-099adf94b65d-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "1d78a5b3-b901-4eb9-bf3c-099adf94b65d") : secret "default-alertmanager-proxy-tls" not found Jan 07 10:18:39 crc kubenswrapper[5131]: I0107 10:18:39.703810 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/1d78a5b3-b901-4eb9-bf3c-099adf94b65d-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"1d78a5b3-b901-4eb9-bf3c-099adf94b65d\") " pod="service-telemetry/alertmanager-default-0" Jan 07 10:18:39 crc kubenswrapper[5131]: I0107 10:18:39.703916 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/1d78a5b3-b901-4eb9-bf3c-099adf94b65d-config-volume\") pod \"alertmanager-default-0\" (UID: \"1d78a5b3-b901-4eb9-bf3c-099adf94b65d\") " pod="service-telemetry/alertmanager-default-0" Jan 07 10:18:39 crc kubenswrapper[5131]: I0107 10:18:39.703999 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sgmlh\" (UniqueName: \"kubernetes.io/projected/1d78a5b3-b901-4eb9-bf3c-099adf94b65d-kube-api-access-sgmlh\") pod \"alertmanager-default-0\" (UID: \"1d78a5b3-b901-4eb9-bf3c-099adf94b65d\") " pod="service-telemetry/alertmanager-default-0" Jan 07 10:18:39 crc kubenswrapper[5131]: I0107 10:18:39.709007 5131 csi_attacher.go:373] kubernetes.io/csi: 
attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 07 10:18:39 crc kubenswrapper[5131]: I0107 10:18:39.709040 5131 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-ba967211-d742-4b7a-9f6d-c0913f00be01\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba967211-d742-4b7a-9f6d-c0913f00be01\") pod \"alertmanager-default-0\" (UID: \"1d78a5b3-b901-4eb9-bf3c-099adf94b65d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c38f09eef8652beb095a52e3db5dfac9541bf7c65bea098414c99d9160437699/globalmount\"" pod="service-telemetry/alertmanager-default-0" Jan 07 10:18:39 crc kubenswrapper[5131]: I0107 10:18:39.711492 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1d78a5b3-b901-4eb9-bf3c-099adf94b65d-web-config\") pod \"alertmanager-default-0\" (UID: \"1d78a5b3-b901-4eb9-bf3c-099adf94b65d\") " pod="service-telemetry/alertmanager-default-0" Jan 07 10:18:39 crc kubenswrapper[5131]: I0107 10:18:39.718505 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/1d78a5b3-b901-4eb9-bf3c-099adf94b65d-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"1d78a5b3-b901-4eb9-bf3c-099adf94b65d\") " pod="service-telemetry/alertmanager-default-0" Jan 07 10:18:39 crc kubenswrapper[5131]: I0107 10:18:39.722751 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1d78a5b3-b901-4eb9-bf3c-099adf94b65d-config-out\") pod \"alertmanager-default-0\" (UID: \"1d78a5b3-b901-4eb9-bf3c-099adf94b65d\") " pod="service-telemetry/alertmanager-default-0" Jan 07 10:18:39 crc kubenswrapper[5131]: I0107 10:18:39.722775 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/1d78a5b3-b901-4eb9-bf3c-099adf94b65d-tls-assets\") pod \"alertmanager-default-0\" (UID: \"1d78a5b3-b901-4eb9-bf3c-099adf94b65d\") " pod="service-telemetry/alertmanager-default-0" Jan 07 10:18:39 crc kubenswrapper[5131]: I0107 10:18:39.722828 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/1d78a5b3-b901-4eb9-bf3c-099adf94b65d-config-volume\") pod \"alertmanager-default-0\" (UID: \"1d78a5b3-b901-4eb9-bf3c-099adf94b65d\") " pod="service-telemetry/alertmanager-default-0" Jan 07 10:18:39 crc kubenswrapper[5131]: I0107 10:18:39.724058 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgmlh\" (UniqueName: \"kubernetes.io/projected/1d78a5b3-b901-4eb9-bf3c-099adf94b65d-kube-api-access-sgmlh\") pod \"alertmanager-default-0\" (UID: \"1d78a5b3-b901-4eb9-bf3c-099adf94b65d\") " pod="service-telemetry/alertmanager-default-0" Jan 07 10:18:39 crc kubenswrapper[5131]: I0107 10:18:39.737145 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/1d78a5b3-b901-4eb9-bf3c-099adf94b65d-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"1d78a5b3-b901-4eb9-bf3c-099adf94b65d\") " pod="service-telemetry/alertmanager-default-0" Jan 07 10:18:39 crc kubenswrapper[5131]: I0107 10:18:39.747924 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-ba967211-d742-4b7a-9f6d-c0913f00be01\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba967211-d742-4b7a-9f6d-c0913f00be01\") pod \"alertmanager-default-0\" (UID: \"1d78a5b3-b901-4eb9-bf3c-099adf94b65d\") " pod="service-telemetry/alertmanager-default-0" Jan 07 10:18:40 crc kubenswrapper[5131]: I0107 10:18:40.214652 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/1d78a5b3-b901-4eb9-bf3c-099adf94b65d-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"1d78a5b3-b901-4eb9-bf3c-099adf94b65d\") " pod="service-telemetry/alertmanager-default-0" Jan 07 10:18:40 crc kubenswrapper[5131]: E0107 10:18:40.214902 5131 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Jan 07 10:18:40 crc kubenswrapper[5131]: E0107 10:18:40.215015 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d78a5b3-b901-4eb9-bf3c-099adf94b65d-secret-default-alertmanager-proxy-tls podName:1d78a5b3-b901-4eb9-bf3c-099adf94b65d nodeName:}" failed. No retries permitted until 2026-01-07 10:18:41.214989622 +0000 UTC m=+1749.381291186 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/1d78a5b3-b901-4eb9-bf3c-099adf94b65d-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "1d78a5b3-b901-4eb9-bf3c-099adf94b65d") : secret "default-alertmanager-proxy-tls" not found Jan 07 10:18:41 crc kubenswrapper[5131]: I0107 10:18:41.003544 5131 generic.go:358] "Generic (PLEG): container finished" podID="0e6a5b7c-d782-4d92-ad16-7557ea99d644" containerID="403459e4ca65198bdc90425ab35423045857cb334f7b5dea5029bee6791f7928" exitCode=0 Jan 07 10:18:41 crc kubenswrapper[5131]: I0107 10:18:41.003635 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"0e6a5b7c-d782-4d92-ad16-7557ea99d644","Type":"ContainerDied","Data":"403459e4ca65198bdc90425ab35423045857cb334f7b5dea5029bee6791f7928"} Jan 07 10:18:41 crc kubenswrapper[5131]: I0107 10:18:41.229518 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/1d78a5b3-b901-4eb9-bf3c-099adf94b65d-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"1d78a5b3-b901-4eb9-bf3c-099adf94b65d\") " pod="service-telemetry/alertmanager-default-0" Jan 07 10:18:41 crc kubenswrapper[5131]: E0107 10:18:41.229672 5131 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Jan 07 10:18:41 crc kubenswrapper[5131]: E0107 10:18:41.229774 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d78a5b3-b901-4eb9-bf3c-099adf94b65d-secret-default-alertmanager-proxy-tls podName:1d78a5b3-b901-4eb9-bf3c-099adf94b65d nodeName:}" failed. No retries permitted until 2026-01-07 10:18:43.229755019 +0000 UTC m=+1751.396056573 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/1d78a5b3-b901-4eb9-bf3c-099adf94b65d-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "1d78a5b3-b901-4eb9-bf3c-099adf94b65d") : secret "default-alertmanager-proxy-tls" not found Jan 07 10:18:43 crc kubenswrapper[5131]: I0107 10:18:43.023405 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-694dc457d5-spwzd" event={"ID":"a2b52004-8840-457e-8c8b-a42110570a94","Type":"ContainerStarted","Data":"5730d9358ac3222d2fd9db637381d4c55dae4c899ab53a58c0dd690de9ecb836"} Jan 07 10:18:43 crc kubenswrapper[5131]: I0107 10:18:43.049343 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-snmp-webhook-694dc457d5-spwzd" podStartSLOduration=2.056880799 podStartE2EDuration="8.049318241s" podCreationTimestamp="2026-01-07 10:18:35 +0000 UTC" firstStartedPulling="2026-01-07 10:18:36.368737208 +0000 UTC m=+1744.535038772" lastFinishedPulling="2026-01-07 10:18:42.36117465 +0000 UTC m=+1750.527476214" observedRunningTime="2026-01-07 
10:18:43.037378634 +0000 UTC m=+1751.203680228" watchObservedRunningTime="2026-01-07 10:18:43.049318241 +0000 UTC m=+1751.215619805" Jan 07 10:18:43 crc kubenswrapper[5131]: I0107 10:18:43.260815 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/1d78a5b3-b901-4eb9-bf3c-099adf94b65d-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"1d78a5b3-b901-4eb9-bf3c-099adf94b65d\") " pod="service-telemetry/alertmanager-default-0" Jan 07 10:18:43 crc kubenswrapper[5131]: I0107 10:18:43.268508 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/1d78a5b3-b901-4eb9-bf3c-099adf94b65d-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"1d78a5b3-b901-4eb9-bf3c-099adf94b65d\") " pod="service-telemetry/alertmanager-default-0" Jan 07 10:18:43 crc kubenswrapper[5131]: I0107 10:18:43.421025 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/alertmanager-default-0" Jan 07 10:18:43 crc kubenswrapper[5131]: I0107 10:18:43.628902 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Jan 07 10:18:46 crc kubenswrapper[5131]: I0107 10:18:46.055465 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"1d78a5b3-b901-4eb9-bf3c-099adf94b65d","Type":"ContainerStarted","Data":"6f0402c0f6d1c9e5ac7ee2be93d8593309332afe086b0ef274c1197979277505"} Jan 07 10:18:47 crc kubenswrapper[5131]: I0107 10:18:47.065901 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"0e6a5b7c-d782-4d92-ad16-7557ea99d644","Type":"ContainerStarted","Data":"7c6efd58511ced4bb8b58a756f10328dab9e0d6713d12baa1249704012ba660a"} Jan 07 10:18:49 crc kubenswrapper[5131]: I0107 10:18:49.082343 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"1d78a5b3-b901-4eb9-bf3c-099adf94b65d","Type":"ContainerStarted","Data":"e7f9f8cf06b0078feac218b2df19efcd870a3a4a0eff7bd37d9508693279958e"} Jan 07 10:18:49 crc kubenswrapper[5131]: I0107 10:18:49.086177 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"0e6a5b7c-d782-4d92-ad16-7557ea99d644","Type":"ContainerStarted","Data":"c32ec8bb4ab3a885c7fa932032bf0735c2e86a1172ef830d3711cc39bc3991db"} Jan 07 10:18:52 crc kubenswrapper[5131]: I0107 10:18:52.190019 5131 scope.go:117] "RemoveContainer" containerID="9663cd7495facf8f3b5c9cd42ca06c0e50d8cba730f2743bbdac9e0b5db67e25" Jan 07 10:18:52 crc kubenswrapper[5131]: E0107 10:18:52.190640 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-dvdrn_openshift-machine-config-operator(3942e752-44ba-4678-8723-6cd778e60d73)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73" Jan 07 10:18:53 crc kubenswrapper[5131]: I0107 10:18:53.164627 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-zjmxl"] Jan 07 10:18:55 crc kubenswrapper[5131]: I0107 10:18:55.850560 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-zjmxl" Jan 07 10:18:55 crc kubenswrapper[5131]: I0107 10:18:55.857224 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-dockercfg-wgph7\"" Jan 07 10:18:55 crc kubenswrapper[5131]: I0107 10:18:55.857457 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-meter-sg-core-configmap\"" Jan 07 10:18:55 crc kubenswrapper[5131]: I0107 10:18:55.857979 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-session-secret\"" Jan 07 10:18:55 crc kubenswrapper[5131]: I0107 10:18:55.860513 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-zjmxl"] Jan 07 10:18:55 crc kubenswrapper[5131]: I0107 10:18:55.868807 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-meter-proxy-tls\"" Jan 07 10:18:55 crc kubenswrapper[5131]: I0107 10:18:55.895867 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/c3630b93-73b5-4548-b124-2b200e4e5af1-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-zjmxl\" (UID: 
\"c3630b93-73b5-4548-b124-2b200e4e5af1\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-zjmxl" Jan 07 10:18:55 crc kubenswrapper[5131]: I0107 10:18:55.895963 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9k5nm\" (UniqueName: \"kubernetes.io/projected/c3630b93-73b5-4548-b124-2b200e4e5af1-kube-api-access-9k5nm\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-zjmxl\" (UID: \"c3630b93-73b5-4548-b124-2b200e4e5af1\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-zjmxl" Jan 07 10:18:55 crc kubenswrapper[5131]: I0107 10:18:55.896075 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/c3630b93-73b5-4548-b124-2b200e4e5af1-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-zjmxl\" (UID: \"c3630b93-73b5-4548-b124-2b200e4e5af1\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-zjmxl" Jan 07 10:18:55 crc kubenswrapper[5131]: I0107 10:18:55.896186 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/c3630b93-73b5-4548-b124-2b200e4e5af1-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-zjmxl\" (UID: \"c3630b93-73b5-4548-b124-2b200e4e5af1\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-zjmxl" Jan 07 10:18:55 crc kubenswrapper[5131]: I0107 10:18:55.896226 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/c3630b93-73b5-4548-b124-2b200e4e5af1-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-zjmxl\" (UID: \"c3630b93-73b5-4548-b124-2b200e4e5af1\") " 
pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-zjmxl" Jan 07 10:18:55 crc kubenswrapper[5131]: I0107 10:18:55.997158 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/c3630b93-73b5-4548-b124-2b200e4e5af1-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-zjmxl\" (UID: \"c3630b93-73b5-4548-b124-2b200e4e5af1\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-zjmxl" Jan 07 10:18:55 crc kubenswrapper[5131]: I0107 10:18:55.997508 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/c3630b93-73b5-4548-b124-2b200e4e5af1-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-zjmxl\" (UID: \"c3630b93-73b5-4548-b124-2b200e4e5af1\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-zjmxl" Jan 07 10:18:55 crc kubenswrapper[5131]: I0107 10:18:55.997535 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/c3630b93-73b5-4548-b124-2b200e4e5af1-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-zjmxl\" (UID: \"c3630b93-73b5-4548-b124-2b200e4e5af1\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-zjmxl" Jan 07 10:18:55 crc kubenswrapper[5131]: I0107 10:18:55.998375 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/c3630b93-73b5-4548-b124-2b200e4e5af1-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-zjmxl\" (UID: \"c3630b93-73b5-4548-b124-2b200e4e5af1\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-zjmxl" Jan 07 10:18:55 crc kubenswrapper[5131]: I0107 10:18:55.998789 5131 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9k5nm\" (UniqueName: \"kubernetes.io/projected/c3630b93-73b5-4548-b124-2b200e4e5af1-kube-api-access-9k5nm\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-zjmxl\" (UID: \"c3630b93-73b5-4548-b124-2b200e4e5af1\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-zjmxl" Jan 07 10:18:55 crc kubenswrapper[5131]: I0107 10:18:55.998896 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/c3630b93-73b5-4548-b124-2b200e4e5af1-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-zjmxl\" (UID: \"c3630b93-73b5-4548-b124-2b200e4e5af1\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-zjmxl" Jan 07 10:18:55 crc kubenswrapper[5131]: I0107 10:18:55.999228 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/c3630b93-73b5-4548-b124-2b200e4e5af1-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-zjmxl\" (UID: \"c3630b93-73b5-4548-b124-2b200e4e5af1\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-zjmxl" Jan 07 10:18:56 crc kubenswrapper[5131]: I0107 10:18:56.003102 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/c3630b93-73b5-4548-b124-2b200e4e5af1-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-zjmxl\" (UID: \"c3630b93-73b5-4548-b124-2b200e4e5af1\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-zjmxl" Jan 07 10:18:56 crc kubenswrapper[5131]: I0107 10:18:56.003602 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/c3630b93-73b5-4548-b124-2b200e4e5af1-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-zjmxl\" (UID: \"c3630b93-73b5-4548-b124-2b200e4e5af1\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-zjmxl"
Jan 07 10:18:56 crc kubenswrapper[5131]: I0107 10:18:56.019123 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9k5nm\" (UniqueName: \"kubernetes.io/projected/c3630b93-73b5-4548-b124-2b200e4e5af1-kube-api-access-9k5nm\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-zjmxl\" (UID: \"c3630b93-73b5-4548-b124-2b200e4e5af1\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-zjmxl"
Jan 07 10:18:56 crc kubenswrapper[5131]: I0107 10:18:56.174875 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-zjmxl"
Jan 07 10:18:56 crc kubenswrapper[5131]: I0107 10:18:56.636689 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-zjmxl"]
Jan 07 10:18:56 crc kubenswrapper[5131]: I0107 10:18:56.690822 5131 generic.go:358] "Generic (PLEG): container finished" podID="1d78a5b3-b901-4eb9-bf3c-099adf94b65d" containerID="e7f9f8cf06b0078feac218b2df19efcd870a3a4a0eff7bd37d9508693279958e" exitCode=0
Jan 07 10:18:56 crc kubenswrapper[5131]: I0107 10:18:56.690866 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"1d78a5b3-b901-4eb9-bf3c-099adf94b65d","Type":"ContainerDied","Data":"e7f9f8cf06b0078feac218b2df19efcd870a3a4a0eff7bd37d9508693279958e"}
Jan 07 10:18:57 crc kubenswrapper[5131]: I0107 10:18:57.093940 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-ccnjq"]
Jan 07 10:18:57 crc kubenswrapper[5131]: I0107 10:18:57.411995 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-ccnjq"]
Jan 07 10:18:57 crc kubenswrapper[5131]: I0107 10:18:57.412321 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-ccnjq"
Jan 07 10:18:57 crc kubenswrapper[5131]: I0107 10:18:57.414480 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-meter-sg-core-configmap\""
Jan 07 10:18:57 crc kubenswrapper[5131]: I0107 10:18:57.415299 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-meter-proxy-tls\""
Jan 07 10:18:57 crc kubenswrapper[5131]: I0107 10:18:57.521605 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnbb5\" (UniqueName: \"kubernetes.io/projected/4a3682af-b97f-48c1-8364-7708c5442e0c-kube-api-access-nnbb5\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-ccnjq\" (UID: \"4a3682af-b97f-48c1-8364-7708c5442e0c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-ccnjq"
Jan 07 10:18:57 crc kubenswrapper[5131]: I0107 10:18:57.521733 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/4a3682af-b97f-48c1-8364-7708c5442e0c-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-ccnjq\" (UID: \"4a3682af-b97f-48c1-8364-7708c5442e0c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-ccnjq"
Jan 07 10:18:57 crc kubenswrapper[5131]: I0107 10:18:57.521780 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/4a3682af-b97f-48c1-8364-7708c5442e0c-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-ccnjq\" (UID: \"4a3682af-b97f-48c1-8364-7708c5442e0c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-ccnjq"
Jan 07 10:18:57 crc kubenswrapper[5131]: I0107 10:18:57.521976 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/4a3682af-b97f-48c1-8364-7708c5442e0c-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-ccnjq\" (UID: \"4a3682af-b97f-48c1-8364-7708c5442e0c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-ccnjq"
Jan 07 10:18:57 crc kubenswrapper[5131]: I0107 10:18:57.522137 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/4a3682af-b97f-48c1-8364-7708c5442e0c-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-ccnjq\" (UID: \"4a3682af-b97f-48c1-8364-7708c5442e0c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-ccnjq"
Jan 07 10:18:57 crc kubenswrapper[5131]: I0107 10:18:57.623036 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/4a3682af-b97f-48c1-8364-7708c5442e0c-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-ccnjq\" (UID: \"4a3682af-b97f-48c1-8364-7708c5442e0c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-ccnjq"
Jan 07 10:18:57 crc kubenswrapper[5131]: I0107 10:18:57.623108 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/4a3682af-b97f-48c1-8364-7708c5442e0c-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-ccnjq\" (UID: \"4a3682af-b97f-48c1-8364-7708c5442e0c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-ccnjq"
Jan 07 10:18:57 crc kubenswrapper[5131]: I0107 10:18:57.623142 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nnbb5\" (UniqueName: \"kubernetes.io/projected/4a3682af-b97f-48c1-8364-7708c5442e0c-kube-api-access-nnbb5\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-ccnjq\" (UID: \"4a3682af-b97f-48c1-8364-7708c5442e0c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-ccnjq"
Jan 07 10:18:57 crc kubenswrapper[5131]: I0107 10:18:57.623179 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/4a3682af-b97f-48c1-8364-7708c5442e0c-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-ccnjq\" (UID: \"4a3682af-b97f-48c1-8364-7708c5442e0c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-ccnjq"
Jan 07 10:18:57 crc kubenswrapper[5131]: I0107 10:18:57.623200 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/4a3682af-b97f-48c1-8364-7708c5442e0c-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-ccnjq\" (UID: \"4a3682af-b97f-48c1-8364-7708c5442e0c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-ccnjq"
Jan 07 10:18:57 crc kubenswrapper[5131]: E0107 10:18:57.623336 5131 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found
Jan 07 10:18:57 crc kubenswrapper[5131]: E0107 10:18:57.623395 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a3682af-b97f-48c1-8364-7708c5442e0c-default-cloud1-ceil-meter-proxy-tls podName:4a3682af-b97f-48c1-8364-7708c5442e0c nodeName:}" failed. No retries permitted until 2026-01-07 10:18:58.123377081 +0000 UTC m=+1766.289678645 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/4a3682af-b97f-48c1-8364-7708c5442e0c-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-ccnjq" (UID: "4a3682af-b97f-48c1-8364-7708c5442e0c") : secret "default-cloud1-ceil-meter-proxy-tls" not found
Jan 07 10:18:57 crc kubenswrapper[5131]: I0107 10:18:57.624117 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/4a3682af-b97f-48c1-8364-7708c5442e0c-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-ccnjq\" (UID: \"4a3682af-b97f-48c1-8364-7708c5442e0c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-ccnjq"
Jan 07 10:18:57 crc kubenswrapper[5131]: I0107 10:18:57.624263 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/4a3682af-b97f-48c1-8364-7708c5442e0c-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-ccnjq\" (UID: \"4a3682af-b97f-48c1-8364-7708c5442e0c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-ccnjq"
Jan 07 10:18:57 crc kubenswrapper[5131]: I0107 10:18:57.630921 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/4a3682af-b97f-48c1-8364-7708c5442e0c-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-ccnjq\" (UID: \"4a3682af-b97f-48c1-8364-7708c5442e0c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-ccnjq"
Jan 07 10:18:57 crc kubenswrapper[5131]: I0107 10:18:57.644372 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnbb5\" (UniqueName: \"kubernetes.io/projected/4a3682af-b97f-48c1-8364-7708c5442e0c-kube-api-access-nnbb5\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-ccnjq\" (UID: \"4a3682af-b97f-48c1-8364-7708c5442e0c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-ccnjq"
Jan 07 10:18:58 crc kubenswrapper[5131]: I0107 10:18:58.131629 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/4a3682af-b97f-48c1-8364-7708c5442e0c-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-ccnjq\" (UID: \"4a3682af-b97f-48c1-8364-7708c5442e0c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-ccnjq"
Jan 07 10:18:58 crc kubenswrapper[5131]: I0107 10:18:58.137966 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/4a3682af-b97f-48c1-8364-7708c5442e0c-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-ccnjq\" (UID: \"4a3682af-b97f-48c1-8364-7708c5442e0c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-ccnjq"
Jan 07 10:18:58 crc kubenswrapper[5131]: I0107 10:18:58.346658 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-ccnjq"
Jan 07 10:18:59 crc kubenswrapper[5131]: W0107 10:18:59.401891 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3630b93_73b5_4548_b124_2b200e4e5af1.slice/crio-1fd2c84d4a51392ee362ef6c4a18edeb27f624ab5758195f031103d6eb2a0723 WatchSource:0}: Error finding container 1fd2c84d4a51392ee362ef6c4a18edeb27f624ab5758195f031103d6eb2a0723: Status 404 returned error can't find the container with id 1fd2c84d4a51392ee362ef6c4a18edeb27f624ab5758195f031103d6eb2a0723
Jan 07 10:18:59 crc kubenswrapper[5131]: I0107 10:18:59.712182 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-zjmxl" event={"ID":"c3630b93-73b5-4548-b124-2b200e4e5af1","Type":"ContainerStarted","Data":"1fd2c84d4a51392ee362ef6c4a18edeb27f624ab5758195f031103d6eb2a0723"}
Jan 07 10:19:00 crc kubenswrapper[5131]: I0107 10:19:00.114490 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlrn9"]
Jan 07 10:19:00 crc kubenswrapper[5131]: I0107 10:19:00.129256 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlrn9"]
Jan 07 10:19:00 crc kubenswrapper[5131]: I0107 10:19:00.129392 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlrn9"
Jan 07 10:19:00 crc kubenswrapper[5131]: I0107 10:19:00.132920 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-sens-meter-proxy-tls\""
Jan 07 10:19:00 crc kubenswrapper[5131]: I0107 10:19:00.133509 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-sens-meter-sg-core-configmap\""
Jan 07 10:19:00 crc kubenswrapper[5131]: I0107 10:19:00.269122 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krp2s\" (UniqueName: \"kubernetes.io/projected/a8abec8e-e2e1-4c4b-a2b9-3298e289f101-kube-api-access-krp2s\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-tlrn9\" (UID: \"a8abec8e-e2e1-4c4b-a2b9-3298e289f101\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlrn9"
Jan 07 10:19:00 crc kubenswrapper[5131]: I0107 10:19:00.269168 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/a8abec8e-e2e1-4c4b-a2b9-3298e289f101-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-tlrn9\" (UID: \"a8abec8e-e2e1-4c4b-a2b9-3298e289f101\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlrn9"
Jan 07 10:19:00 crc kubenswrapper[5131]: I0107 10:19:00.269466 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/a8abec8e-e2e1-4c4b-a2b9-3298e289f101-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-tlrn9\" (UID: \"a8abec8e-e2e1-4c4b-a2b9-3298e289f101\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlrn9"
Jan 07 10:19:00 crc kubenswrapper[5131]: I0107 10:19:00.269519 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/a8abec8e-e2e1-4c4b-a2b9-3298e289f101-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-tlrn9\" (UID: \"a8abec8e-e2e1-4c4b-a2b9-3298e289f101\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlrn9"
Jan 07 10:19:00 crc kubenswrapper[5131]: I0107 10:19:00.269598 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/a8abec8e-e2e1-4c4b-a2b9-3298e289f101-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-tlrn9\" (UID: \"a8abec8e-e2e1-4c4b-a2b9-3298e289f101\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlrn9"
Jan 07 10:19:00 crc kubenswrapper[5131]: I0107 10:19:00.371188 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/a8abec8e-e2e1-4c4b-a2b9-3298e289f101-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-tlrn9\" (UID: \"a8abec8e-e2e1-4c4b-a2b9-3298e289f101\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlrn9"
Jan 07 10:19:00 crc kubenswrapper[5131]: I0107 10:19:00.371250 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/a8abec8e-e2e1-4c4b-a2b9-3298e289f101-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-tlrn9\" (UID: \"a8abec8e-e2e1-4c4b-a2b9-3298e289f101\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlrn9"
Jan 07 10:19:00 crc kubenswrapper[5131]: I0107 10:19:00.371305 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/a8abec8e-e2e1-4c4b-a2b9-3298e289f101-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-tlrn9\" (UID: \"a8abec8e-e2e1-4c4b-a2b9-3298e289f101\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlrn9"
Jan 07 10:19:00 crc kubenswrapper[5131]: I0107 10:19:00.371361 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-krp2s\" (UniqueName: \"kubernetes.io/projected/a8abec8e-e2e1-4c4b-a2b9-3298e289f101-kube-api-access-krp2s\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-tlrn9\" (UID: \"a8abec8e-e2e1-4c4b-a2b9-3298e289f101\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlrn9"
Jan 07 10:19:00 crc kubenswrapper[5131]: I0107 10:19:00.371392 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/a8abec8e-e2e1-4c4b-a2b9-3298e289f101-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-tlrn9\" (UID: \"a8abec8e-e2e1-4c4b-a2b9-3298e289f101\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlrn9"
Jan 07 10:19:00 crc kubenswrapper[5131]: E0107 10:19:00.371410 5131 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found
Jan 07 10:19:00 crc kubenswrapper[5131]: E0107 10:19:00.371504 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8abec8e-e2e1-4c4b-a2b9-3298e289f101-default-cloud1-sens-meter-proxy-tls podName:a8abec8e-e2e1-4c4b-a2b9-3298e289f101 nodeName:}" failed. No retries permitted until 2026-01-07 10:19:00.871483752 +0000 UTC m=+1769.037785316 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/a8abec8e-e2e1-4c4b-a2b9-3298e289f101-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-58c78bbf69-tlrn9" (UID: "a8abec8e-e2e1-4c4b-a2b9-3298e289f101") : secret "default-cloud1-sens-meter-proxy-tls" not found
Jan 07 10:19:00 crc kubenswrapper[5131]: I0107 10:19:00.371745 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/a8abec8e-e2e1-4c4b-a2b9-3298e289f101-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-tlrn9\" (UID: \"a8abec8e-e2e1-4c4b-a2b9-3298e289f101\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlrn9"
Jan 07 10:19:00 crc kubenswrapper[5131]: I0107 10:19:00.372463 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/a8abec8e-e2e1-4c4b-a2b9-3298e289f101-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-tlrn9\" (UID: \"a8abec8e-e2e1-4c4b-a2b9-3298e289f101\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlrn9"
Jan 07 10:19:00 crc kubenswrapper[5131]: I0107 10:19:00.377600 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/a8abec8e-e2e1-4c4b-a2b9-3298e289f101-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-tlrn9\" (UID: \"a8abec8e-e2e1-4c4b-a2b9-3298e289f101\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlrn9"
Jan 07 10:19:00 crc kubenswrapper[5131]: I0107 10:19:00.394356 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-krp2s\" (UniqueName: \"kubernetes.io/projected/a8abec8e-e2e1-4c4b-a2b9-3298e289f101-kube-api-access-krp2s\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-tlrn9\" (UID: \"a8abec8e-e2e1-4c4b-a2b9-3298e289f101\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlrn9"
Jan 07 10:19:00 crc kubenswrapper[5131]: I0107 10:19:00.877856 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/a8abec8e-e2e1-4c4b-a2b9-3298e289f101-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-tlrn9\" (UID: \"a8abec8e-e2e1-4c4b-a2b9-3298e289f101\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlrn9"
Jan 07 10:19:00 crc kubenswrapper[5131]: E0107 10:19:00.877995 5131 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found
Jan 07 10:19:00 crc kubenswrapper[5131]: E0107 10:19:00.878065 5131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8abec8e-e2e1-4c4b-a2b9-3298e289f101-default-cloud1-sens-meter-proxy-tls podName:a8abec8e-e2e1-4c4b-a2b9-3298e289f101 nodeName:}" failed. No retries permitted until 2026-01-07 10:19:01.878050347 +0000 UTC m=+1770.044351911 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/a8abec8e-e2e1-4c4b-a2b9-3298e289f101-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-58c78bbf69-tlrn9" (UID: "a8abec8e-e2e1-4c4b-a2b9-3298e289f101") : secret "default-cloud1-sens-meter-proxy-tls" not found
Jan 07 10:19:01 crc kubenswrapper[5131]: I0107 10:19:01.892083 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/a8abec8e-e2e1-4c4b-a2b9-3298e289f101-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-tlrn9\" (UID: \"a8abec8e-e2e1-4c4b-a2b9-3298e289f101\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlrn9"
Jan 07 10:19:01 crc kubenswrapper[5131]: I0107 10:19:01.901572 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/a8abec8e-e2e1-4c4b-a2b9-3298e289f101-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-tlrn9\" (UID: \"a8abec8e-e2e1-4c4b-a2b9-3298e289f101\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlrn9"
Jan 07 10:19:01 crc kubenswrapper[5131]: I0107 10:19:01.944877 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlrn9"
Jan 07 10:19:02 crc kubenswrapper[5131]: I0107 10:19:02.632619 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-ccnjq"]
Jan 07 10:19:02 crc kubenswrapper[5131]: W0107 10:19:02.648385 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a3682af_b97f_48c1_8364_7708c5442e0c.slice/crio-e8ff8f1df039813cc472d51c0552d5c2b0019d687b27ff3ef84b3fa0d25b91c8 WatchSource:0}: Error finding container e8ff8f1df039813cc472d51c0552d5c2b0019d687b27ff3ef84b3fa0d25b91c8: Status 404 returned error can't find the container with id e8ff8f1df039813cc472d51c0552d5c2b0019d687b27ff3ef84b3fa0d25b91c8
Jan 07 10:19:02 crc kubenswrapper[5131]: I0107 10:19:02.715854 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlrn9"]
Jan 07 10:19:02 crc kubenswrapper[5131]: W0107 10:19:02.723202 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8abec8e_e2e1_4c4b_a2b9_3298e289f101.slice/crio-ad2c87de4e890074265850cda227ef7a810fc0972821713ec3aea9c1d830a1e5 WatchSource:0}: Error finding container ad2c87de4e890074265850cda227ef7a810fc0972821713ec3aea9c1d830a1e5: Status 404 returned error can't find the container with id ad2c87de4e890074265850cda227ef7a810fc0972821713ec3aea9c1d830a1e5
Jan 07 10:19:02 crc kubenswrapper[5131]: I0107 10:19:02.740147 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"0e6a5b7c-d782-4d92-ad16-7557ea99d644","Type":"ContainerStarted","Data":"db43da4551025f21a5c78aedb4cee6677322dd43e56e47b49495dbfa3e48c547"}
Jan 07 10:19:02 crc kubenswrapper[5131]: I0107 10:19:02.741567 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlrn9" event={"ID":"a8abec8e-e2e1-4c4b-a2b9-3298e289f101","Type":"ContainerStarted","Data":"ad2c87de4e890074265850cda227ef7a810fc0972821713ec3aea9c1d830a1e5"}
Jan 07 10:19:02 crc kubenswrapper[5131]: I0107 10:19:02.743722 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-ccnjq" event={"ID":"4a3682af-b97f-48c1-8364-7708c5442e0c","Type":"ContainerStarted","Data":"e8ff8f1df039813cc472d51c0552d5c2b0019d687b27ff3ef84b3fa0d25b91c8"}
Jan 07 10:19:02 crc kubenswrapper[5131]: I0107 10:19:02.744992 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-zjmxl" event={"ID":"c3630b93-73b5-4548-b124-2b200e4e5af1","Type":"ContainerStarted","Data":"d2c546e05270c4290d575ea855132eec13e89c78786e155973abf86f7deafc19"}
Jan 07 10:19:02 crc kubenswrapper[5131]: I0107 10:19:02.775323 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-default-0" podStartSLOduration=4.568375555 podStartE2EDuration="38.775300434s" podCreationTimestamp="2026-01-07 10:18:24 +0000 UTC" firstStartedPulling="2026-01-07 10:18:28.086964051 +0000 UTC m=+1736.253265615" lastFinishedPulling="2026-01-07 10:19:02.29388893 +0000 UTC m=+1770.460190494" observedRunningTime="2026-01-07 10:19:02.758995732 +0000 UTC m=+1770.925297316" watchObservedRunningTime="2026-01-07 10:19:02.775300434 +0000 UTC m=+1770.941601998"
Jan 07 10:19:02 crc kubenswrapper[5131]: I0107 10:19:02.830888 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/prometheus-default-0"
Jan 07 10:19:04 crc kubenswrapper[5131]: I0107 10:19:04.179887 5131 scope.go:117] "RemoveContainer" containerID="9663cd7495facf8f3b5c9cd42ca06c0e50d8cba730f2743bbdac9e0b5db67e25"
Jan 07 10:19:04 crc kubenswrapper[5131]: E0107 10:19:04.180457 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvdrn_openshift-machine-config-operator(3942e752-44ba-4678-8723-6cd778e60d73)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73"
Jan 07 10:19:04 crc kubenswrapper[5131]: I0107 10:19:04.779535 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlrn9" event={"ID":"a8abec8e-e2e1-4c4b-a2b9-3298e289f101","Type":"ContainerStarted","Data":"fa41811f988e337500c682a03781da881eafc4d7e83e8026ed505f1402d7514a"}
Jan 07 10:19:04 crc kubenswrapper[5131]: I0107 10:19:04.785038 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-ccnjq" event={"ID":"4a3682af-b97f-48c1-8364-7708c5442e0c","Type":"ContainerStarted","Data":"c1705a665a0fe0d03bd143f7d9aa2e383813cee314760e54233d6d54a07d5afb"}
Jan 07 10:19:05 crc kubenswrapper[5131]: I0107 10:19:05.793874 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"1d78a5b3-b901-4eb9-bf3c-099adf94b65d","Type":"ContainerStarted","Data":"db0f28990939394b376a734dc3d9d63a32cadf477d80f036d08f627126ea5f58"}
Jan 07 10:19:06 crc kubenswrapper[5131]: I0107 10:19:06.802271 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"1d78a5b3-b901-4eb9-bf3c-099adf94b65d","Type":"ContainerStarted","Data":"a6c06fe77a7db469bd57cf9750f0f96d468461e649809dbb96dbd1204a83eb13"}
Jan 07 10:19:07 crc kubenswrapper[5131]: I0107 10:19:07.572778 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-65cf5d5bcb-7fztr"]
Jan 07 10:19:07 crc kubenswrapper[5131]: I0107 10:19:07.596170 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-65cf5d5bcb-7fztr"]
Jan 07 10:19:07 crc kubenswrapper[5131]: I0107 10:19:07.596241 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-65cf5d5bcb-7fztr"
Jan 07 10:19:07 crc kubenswrapper[5131]: I0107 10:19:07.598475 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-event-sg-core-configmap\""
Jan 07 10:19:07 crc kubenswrapper[5131]: I0107 10:19:07.601803 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-cert\""
Jan 07 10:19:07 crc kubenswrapper[5131]: I0107 10:19:07.683956 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/2fd50338-e2ff-4265-a53d-62252487438d-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-65cf5d5bcb-7fztr\" (UID: \"2fd50338-e2ff-4265-a53d-62252487438d\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-65cf5d5bcb-7fztr"
Jan 07 10:19:07 crc kubenswrapper[5131]: I0107 10:19:07.684013 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/2fd50338-e2ff-4265-a53d-62252487438d-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-65cf5d5bcb-7fztr\" (UID: \"2fd50338-e2ff-4265-a53d-62252487438d\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-65cf5d5bcb-7fztr"
Jan 07 10:19:07 crc kubenswrapper[5131]: I0107 10:19:07.684103 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/2fd50338-e2ff-4265-a53d-62252487438d-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-65cf5d5bcb-7fztr\" (UID: \"2fd50338-e2ff-4265-a53d-62252487438d\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-65cf5d5bcb-7fztr"
Jan 07 10:19:07 crc kubenswrapper[5131]: I0107 10:19:07.684158 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt56b\" (UniqueName: \"kubernetes.io/projected/2fd50338-e2ff-4265-a53d-62252487438d-kube-api-access-wt56b\") pod \"default-cloud1-coll-event-smartgateway-65cf5d5bcb-7fztr\" (UID: \"2fd50338-e2ff-4265-a53d-62252487438d\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-65cf5d5bcb-7fztr"
Jan 07 10:19:07 crc kubenswrapper[5131]: I0107 10:19:07.785956 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wt56b\" (UniqueName: \"kubernetes.io/projected/2fd50338-e2ff-4265-a53d-62252487438d-kube-api-access-wt56b\") pod \"default-cloud1-coll-event-smartgateway-65cf5d5bcb-7fztr\" (UID: \"2fd50338-e2ff-4265-a53d-62252487438d\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-65cf5d5bcb-7fztr"
Jan 07 10:19:07 crc kubenswrapper[5131]: I0107 10:19:07.786313 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/2fd50338-e2ff-4265-a53d-62252487438d-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-65cf5d5bcb-7fztr\" (UID: \"2fd50338-e2ff-4265-a53d-62252487438d\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-65cf5d5bcb-7fztr"
Jan 07 10:19:07 crc kubenswrapper[5131]: I0107 10:19:07.786361 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/2fd50338-e2ff-4265-a53d-62252487438d-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-65cf5d5bcb-7fztr\" (UID: \"2fd50338-e2ff-4265-a53d-62252487438d\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-65cf5d5bcb-7fztr"
Jan 07 10:19:07 crc kubenswrapper[5131]: I0107 10:19:07.786422 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/2fd50338-e2ff-4265-a53d-62252487438d-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-65cf5d5bcb-7fztr\" (UID: \"2fd50338-e2ff-4265-a53d-62252487438d\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-65cf5d5bcb-7fztr"
Jan 07 10:19:07 crc kubenswrapper[5131]: I0107 10:19:07.787394 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/2fd50338-e2ff-4265-a53d-62252487438d-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-65cf5d5bcb-7fztr\" (UID: \"2fd50338-e2ff-4265-a53d-62252487438d\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-65cf5d5bcb-7fztr"
Jan 07 10:19:07 crc kubenswrapper[5131]: I0107 10:19:07.787964 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/2fd50338-e2ff-4265-a53d-62252487438d-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-65cf5d5bcb-7fztr\" (UID: \"2fd50338-e2ff-4265-a53d-62252487438d\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-65cf5d5bcb-7fztr"
Jan 07 10:19:07 crc kubenswrapper[5131]: I0107 10:19:07.801026 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/2fd50338-e2ff-4265-a53d-62252487438d-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-65cf5d5bcb-7fztr\" (UID: \"2fd50338-e2ff-4265-a53d-62252487438d\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-65cf5d5bcb-7fztr"
Jan 07 10:19:07 crc kubenswrapper[5131]: I0107 10:19:07.802644 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wt56b\" (UniqueName: \"kubernetes.io/projected/2fd50338-e2ff-4265-a53d-62252487438d-kube-api-access-wt56b\") pod \"default-cloud1-coll-event-smartgateway-65cf5d5bcb-7fztr\" (UID: \"2fd50338-e2ff-4265-a53d-62252487438d\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-65cf5d5bcb-7fztr"
Jan 07 10:19:07 crc kubenswrapper[5131]: I0107 10:19:07.918735 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-65cf5d5bcb-7fztr"
Jan 07 10:19:08 crc kubenswrapper[5131]: I0107 10:19:08.747436 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-7d5f9469c5-jh7bv"]
Jan 07 10:19:08 crc kubenswrapper[5131]: I0107 10:19:08.759476 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-7d5f9469c5-jh7bv"]
Jan 07 10:19:08 crc kubenswrapper[5131]: I0107 10:19:08.759667 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7d5f9469c5-jh7bv"
Jan 07 10:19:08 crc kubenswrapper[5131]: I0107 10:19:08.762383 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-event-sg-core-configmap\""
Jan 07 10:19:08 crc kubenswrapper[5131]: I0107 10:19:08.907403 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9phgw\" (UniqueName: \"kubernetes.io/projected/98eab5a3-19fb-4e74-808d-26b8c315d76e-kube-api-access-9phgw\") pod \"default-cloud1-ceil-event-smartgateway-7d5f9469c5-jh7bv\" (UID: \"98eab5a3-19fb-4e74-808d-26b8c315d76e\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7d5f9469c5-jh7bv"
Jan 07 10:19:08 crc kubenswrapper[5131]: I0107 10:19:08.907452 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/98eab5a3-19fb-4e74-808d-26b8c315d76e-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-7d5f9469c5-jh7bv\" (UID: \"98eab5a3-19fb-4e74-808d-26b8c315d76e\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7d5f9469c5-jh7bv"
Jan 07 10:19:08 crc kubenswrapper[5131]: I0107 10:19:08.907485 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/98eab5a3-19fb-4e74-808d-26b8c315d76e-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-7d5f9469c5-jh7bv\" (UID: \"98eab5a3-19fb-4e74-808d-26b8c315d76e\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7d5f9469c5-jh7bv"
Jan 07 10:19:08 crc kubenswrapper[5131]: I0107 10:19:08.907699 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/98eab5a3-19fb-4e74-808d-26b8c315d76e-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-7d5f9469c5-jh7bv\" (UID: \"98eab5a3-19fb-4e74-808d-26b8c315d76e\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7d5f9469c5-jh7bv"
Jan 07 10:19:09 crc kubenswrapper[5131]: I0107 10:19:09.008523 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/98eab5a3-19fb-4e74-808d-26b8c315d76e-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-7d5f9469c5-jh7bv\" (UID: \"98eab5a3-19fb-4e74-808d-26b8c315d76e\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7d5f9469c5-jh7bv"
Jan 07 10:19:09 crc kubenswrapper[5131]: I0107 10:19:09.008613 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/98eab5a3-19fb-4e74-808d-26b8c315d76e-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-7d5f9469c5-jh7bv\" (UID: \"98eab5a3-19fb-4e74-808d-26b8c315d76e\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7d5f9469c5-jh7bv"
Jan 07 10:19:09 crc kubenswrapper[5131]: I0107 10:19:09.008656 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9phgw\" (UniqueName: \"kubernetes.io/projected/98eab5a3-19fb-4e74-808d-26b8c315d76e-kube-api-access-9phgw\") pod \"default-cloud1-ceil-event-smartgateway-7d5f9469c5-jh7bv\" (UID: \"98eab5a3-19fb-4e74-808d-26b8c315d76e\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7d5f9469c5-jh7bv"
Jan 07 10:19:09 crc kubenswrapper[5131]: I0107 10:19:09.008677 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/98eab5a3-19fb-4e74-808d-26b8c315d76e-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-7d5f9469c5-jh7bv\" (UID: \"98eab5a3-19fb-4e74-808d-26b8c315d76e\") " 
pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7d5f9469c5-jh7bv" Jan 07 10:19:09 crc kubenswrapper[5131]: I0107 10:19:09.009378 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/98eab5a3-19fb-4e74-808d-26b8c315d76e-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-7d5f9469c5-jh7bv\" (UID: \"98eab5a3-19fb-4e74-808d-26b8c315d76e\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7d5f9469c5-jh7bv" Jan 07 10:19:09 crc kubenswrapper[5131]: I0107 10:19:09.009531 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/98eab5a3-19fb-4e74-808d-26b8c315d76e-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-7d5f9469c5-jh7bv\" (UID: \"98eab5a3-19fb-4e74-808d-26b8c315d76e\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7d5f9469c5-jh7bv" Jan 07 10:19:09 crc kubenswrapper[5131]: I0107 10:19:09.022294 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/98eab5a3-19fb-4e74-808d-26b8c315d76e-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-7d5f9469c5-jh7bv\" (UID: \"98eab5a3-19fb-4e74-808d-26b8c315d76e\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7d5f9469c5-jh7bv" Jan 07 10:19:09 crc kubenswrapper[5131]: I0107 10:19:09.034677 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9phgw\" (UniqueName: \"kubernetes.io/projected/98eab5a3-19fb-4e74-808d-26b8c315d76e-kube-api-access-9phgw\") pod \"default-cloud1-ceil-event-smartgateway-7d5f9469c5-jh7bv\" (UID: \"98eab5a3-19fb-4e74-808d-26b8c315d76e\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7d5f9469c5-jh7bv" Jan 07 10:19:09 crc kubenswrapper[5131]: I0107 10:19:09.095416 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7d5f9469c5-jh7bv" Jan 07 10:19:10 crc kubenswrapper[5131]: I0107 10:19:10.212791 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-65cf5d5bcb-7fztr"] Jan 07 10:19:10 crc kubenswrapper[5131]: I0107 10:19:10.325668 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-7d5f9469c5-jh7bv"] Jan 07 10:19:10 crc kubenswrapper[5131]: I0107 10:19:10.844702 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"1d78a5b3-b901-4eb9-bf3c-099adf94b65d","Type":"ContainerStarted","Data":"947a5c1aff03997fac9d91f5ed77746b8ce28f909eb4c669797b6f448d5b37d8"} Jan 07 10:19:10 crc kubenswrapper[5131]: I0107 10:19:10.847174 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlrn9" event={"ID":"a8abec8e-e2e1-4c4b-a2b9-3298e289f101","Type":"ContainerStarted","Data":"d56bdcda61bf32a4f80f6cb2ac06ea9503ec9de1e2ca4f29c3a1a79506a4f0f1"} Jan 07 10:19:10 crc kubenswrapper[5131]: I0107 10:19:10.849340 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7d5f9469c5-jh7bv" event={"ID":"98eab5a3-19fb-4e74-808d-26b8c315d76e","Type":"ContainerStarted","Data":"1869f47e856c1becb9946dfe8968042d7ead8c8eb4a166565c1e966f6349d97f"} Jan 07 10:19:10 crc kubenswrapper[5131]: I0107 10:19:10.849376 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7d5f9469c5-jh7bv" event={"ID":"98eab5a3-19fb-4e74-808d-26b8c315d76e","Type":"ContainerStarted","Data":"509e9516098b8342045fbb7b9b34df4f2235cb9c8b9ee2cd31010f43f938fc77"} Jan 07 10:19:10 crc kubenswrapper[5131]: I0107 10:19:10.852014 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-ccnjq" event={"ID":"4a3682af-b97f-48c1-8364-7708c5442e0c","Type":"ContainerStarted","Data":"802c2b619cb8f6a52f07103e25f13348edc80d5df3d12bc171dea9a6b98b0a3f"} Jan 07 10:19:10 crc kubenswrapper[5131]: I0107 10:19:10.853613 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-zjmxl" event={"ID":"c3630b93-73b5-4548-b124-2b200e4e5af1","Type":"ContainerStarted","Data":"ff819bf8caaa01f5927084b067942f1a1c492d0e6e58e1fdb47d7ae3fd7f8618"} Jan 07 10:19:10 crc kubenswrapper[5131]: I0107 10:19:10.855983 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-65cf5d5bcb-7fztr" event={"ID":"2fd50338-e2ff-4265-a53d-62252487438d","Type":"ContainerStarted","Data":"aae0ea34892194aa04cdef0aefac92ac444f822e90c25b4bc249772cfd485cfb"} Jan 07 10:19:10 crc kubenswrapper[5131]: I0107 10:19:10.856013 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-65cf5d5bcb-7fztr" event={"ID":"2fd50338-e2ff-4265-a53d-62252487438d","Type":"ContainerStarted","Data":"a6a31c9745209839ae5efda69fdb1690691fe41900bcdb395b38e64571e68889"} Jan 07 10:19:10 crc kubenswrapper[5131]: I0107 10:19:10.875236 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/alertmanager-default-0" podStartSLOduration=19.689099933 podStartE2EDuration="32.875220111s" podCreationTimestamp="2026-01-07 10:18:38 +0000 UTC" firstStartedPulling="2026-01-07 10:18:56.691729142 +0000 UTC m=+1764.858030706" lastFinishedPulling="2026-01-07 10:19:09.87784932 +0000 UTC m=+1778.044150884" observedRunningTime="2026-01-07 10:19:10.865957582 +0000 UTC m=+1779.032259166" watchObservedRunningTime="2026-01-07 10:19:10.875220111 +0000 UTC m=+1779.041521675" Jan 07 10:19:12 crc kubenswrapper[5131]: I0107 10:19:12.830871 5131 kubelet.go:2658] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/prometheus-default-0" Jan 07 10:19:12 crc kubenswrapper[5131]: I0107 10:19:12.875344 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/prometheus-default-0" Jan 07 10:19:12 crc kubenswrapper[5131]: I0107 10:19:12.926354 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/prometheus-default-0" Jan 07 10:19:14 crc kubenswrapper[5131]: I0107 10:19:14.888901 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-ccnjq" event={"ID":"4a3682af-b97f-48c1-8364-7708c5442e0c","Type":"ContainerStarted","Data":"5933608e8ac35f455d492ed4d5f9ed45cb4cac81d6bb228da04cf50e926de74c"} Jan 07 10:19:14 crc kubenswrapper[5131]: I0107 10:19:14.892724 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-zjmxl" event={"ID":"c3630b93-73b5-4548-b124-2b200e4e5af1","Type":"ContainerStarted","Data":"aec6b37e11157159d49f23f56c07112d799a57a7965cf50cf233c634e50f3453"} Jan 07 10:19:14 crc kubenswrapper[5131]: I0107 10:19:14.896056 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-65cf5d5bcb-7fztr" event={"ID":"2fd50338-e2ff-4265-a53d-62252487438d","Type":"ContainerStarted","Data":"6ed1413a6d9a040501dc08eeef5119ef6c264ea7e81e195653cb89b227042a6b"} Jan 07 10:19:14 crc kubenswrapper[5131]: I0107 10:19:14.899181 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlrn9" event={"ID":"a8abec8e-e2e1-4c4b-a2b9-3298e289f101","Type":"ContainerStarted","Data":"8fce4d6cb554d826d8562c7fe29a608c701cd7014fe872048e118c1dcccce22d"} Jan 07 10:19:14 crc kubenswrapper[5131]: I0107 10:19:14.901688 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7d5f9469c5-jh7bv" event={"ID":"98eab5a3-19fb-4e74-808d-26b8c315d76e","Type":"ContainerStarted","Data":"33c9a5eec77775e50ccca86c622452bfa8dfb4d1ed23d68fc7a92656de5f6bbe"} Jan 07 10:19:14 crc kubenswrapper[5131]: I0107 10:19:14.917451 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-ccnjq" podStartSLOduration=6.433479692 podStartE2EDuration="17.917432249s" podCreationTimestamp="2026-01-07 10:18:57 +0000 UTC" firstStartedPulling="2026-01-07 10:19:02.650618086 +0000 UTC m=+1770.816919650" lastFinishedPulling="2026-01-07 10:19:14.134570643 +0000 UTC m=+1782.300872207" observedRunningTime="2026-01-07 10:19:14.916763203 +0000 UTC m=+1783.083064767" watchObservedRunningTime="2026-01-07 10:19:14.917432249 +0000 UTC m=+1783.083733813" Jan 07 10:19:14 crc kubenswrapper[5131]: I0107 10:19:14.973822 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlrn9" podStartSLOduration=3.595769089 podStartE2EDuration="14.973803451s" podCreationTimestamp="2026-01-07 10:19:00 +0000 UTC" firstStartedPulling="2026-01-07 10:19:02.727247638 +0000 UTC m=+1770.893549202" lastFinishedPulling="2026-01-07 10:19:14.105282 +0000 UTC m=+1782.271583564" observedRunningTime="2026-01-07 10:19:14.967973657 +0000 UTC m=+1783.134275221" watchObservedRunningTime="2026-01-07 10:19:14.973803451 +0000 UTC m=+1783.140105015" Jan 07 10:19:14 crc kubenswrapper[5131]: I0107 10:19:14.975850 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-event-smartgateway-65cf5d5bcb-7fztr" podStartSLOduration=3.9888786769999998 podStartE2EDuration="7.975841811s" podCreationTimestamp="2026-01-07 10:19:07 +0000 UTC" firstStartedPulling="2026-01-07 10:19:10.225804879 +0000 UTC m=+1778.392106443" 
lastFinishedPulling="2026-01-07 10:19:14.212768013 +0000 UTC m=+1782.379069577" observedRunningTime="2026-01-07 10:19:14.950708531 +0000 UTC m=+1783.117010105" watchObservedRunningTime="2026-01-07 10:19:14.975841811 +0000 UTC m=+1783.142143375" Jan 07 10:19:14 crc kubenswrapper[5131]: I0107 10:19:14.985585 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-zjmxl" podStartSLOduration=7.317712995 podStartE2EDuration="21.985569821s" podCreationTimestamp="2026-01-07 10:18:53 +0000 UTC" firstStartedPulling="2026-01-07 10:18:59.412612301 +0000 UTC m=+1767.578913865" lastFinishedPulling="2026-01-07 10:19:14.080469127 +0000 UTC m=+1782.246770691" observedRunningTime="2026-01-07 10:19:14.984423093 +0000 UTC m=+1783.150724667" watchObservedRunningTime="2026-01-07 10:19:14.985569821 +0000 UTC m=+1783.151871375" Jan 07 10:19:19 crc kubenswrapper[5131]: I0107 10:19:19.180340 5131 scope.go:117] "RemoveContainer" containerID="9663cd7495facf8f3b5c9cd42ca06c0e50d8cba730f2743bbdac9e0b5db67e25" Jan 07 10:19:19 crc kubenswrapper[5131]: E0107 10:19:19.180925 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvdrn_openshift-machine-config-operator(3942e752-44ba-4678-8723-6cd778e60d73)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73" Jan 07 10:19:19 crc kubenswrapper[5131]: I0107 10:19:19.990125 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7d5f9469c5-jh7bv" podStartSLOduration=8.166238577 podStartE2EDuration="11.990105055s" podCreationTimestamp="2026-01-07 10:19:08 +0000 UTC" firstStartedPulling="2026-01-07 10:19:10.332182385 +0000 UTC m=+1778.498483949" 
lastFinishedPulling="2026-01-07 10:19:14.156048853 +0000 UTC m=+1782.322350427" observedRunningTime="2026-01-07 10:19:15.011103812 +0000 UTC m=+1783.177405376" watchObservedRunningTime="2026-01-07 10:19:19.990105055 +0000 UTC m=+1788.156406629" Jan 07 10:19:19 crc kubenswrapper[5131]: I0107 10:19:19.995913 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-xqws8"] Jan 07 10:19:19 crc kubenswrapper[5131]: I0107 10:19:19.996224 5131 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/default-interconnect-55bf8d5cb-xqws8" podUID="961ff40e-d41b-4c63-b871-9d8d01acfc9e" containerName="default-interconnect" containerID="cri-o://a1fa7e3ebf193de8c5e9aa0749c04fd3ac50d0a53ffbc5d46af894ce10a19eef" gracePeriod=30 Jan 07 10:19:20 crc kubenswrapper[5131]: I0107 10:19:20.421021 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-xqws8" Jan 07 10:19:20 crc kubenswrapper[5131]: I0107 10:19:20.463285 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-86v7n"] Jan 07 10:19:20 crc kubenswrapper[5131]: I0107 10:19:20.464201 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="961ff40e-d41b-4c63-b871-9d8d01acfc9e" containerName="default-interconnect" Jan 07 10:19:20 crc kubenswrapper[5131]: I0107 10:19:20.464227 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="961ff40e-d41b-4c63-b871-9d8d01acfc9e" containerName="default-interconnect" Jan 07 10:19:20 crc kubenswrapper[5131]: I0107 10:19:20.464528 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="961ff40e-d41b-4c63-b871-9d8d01acfc9e" containerName="default-interconnect" Jan 07 10:19:20 crc kubenswrapper[5131]: I0107 10:19:20.470296 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-86v7n" Jan 07 10:19:20 crc kubenswrapper[5131]: I0107 10:19:20.478527 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-86v7n"] Jan 07 10:19:20 crc kubenswrapper[5131]: I0107 10:19:20.503072 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/961ff40e-d41b-4c63-b871-9d8d01acfc9e-default-interconnect-openstack-credentials\") pod \"961ff40e-d41b-4c63-b871-9d8d01acfc9e\" (UID: \"961ff40e-d41b-4c63-b871-9d8d01acfc9e\") " Jan 07 10:19:20 crc kubenswrapper[5131]: I0107 10:19:20.503115 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hfkc\" (UniqueName: \"kubernetes.io/projected/961ff40e-d41b-4c63-b871-9d8d01acfc9e-kube-api-access-2hfkc\") pod \"961ff40e-d41b-4c63-b871-9d8d01acfc9e\" (UID: \"961ff40e-d41b-4c63-b871-9d8d01acfc9e\") " Jan 07 10:19:20 crc kubenswrapper[5131]: I0107 10:19:20.503205 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/961ff40e-d41b-4c63-b871-9d8d01acfc9e-sasl-config\") pod \"961ff40e-d41b-4c63-b871-9d8d01acfc9e\" (UID: \"961ff40e-d41b-4c63-b871-9d8d01acfc9e\") " Jan 07 10:19:20 crc kubenswrapper[5131]: I0107 10:19:20.503225 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/961ff40e-d41b-4c63-b871-9d8d01acfc9e-default-interconnect-inter-router-ca\") pod \"961ff40e-d41b-4c63-b871-9d8d01acfc9e\" (UID: \"961ff40e-d41b-4c63-b871-9d8d01acfc9e\") " Jan 07 10:19:20 crc kubenswrapper[5131]: I0107 10:19:20.503255 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: 
\"kubernetes.io/secret/961ff40e-d41b-4c63-b871-9d8d01acfc9e-default-interconnect-openstack-ca\") pod \"961ff40e-d41b-4c63-b871-9d8d01acfc9e\" (UID: \"961ff40e-d41b-4c63-b871-9d8d01acfc9e\") " Jan 07 10:19:20 crc kubenswrapper[5131]: I0107 10:19:20.503274 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/961ff40e-d41b-4c63-b871-9d8d01acfc9e-sasl-users\") pod \"961ff40e-d41b-4c63-b871-9d8d01acfc9e\" (UID: \"961ff40e-d41b-4c63-b871-9d8d01acfc9e\") " Jan 07 10:19:20 crc kubenswrapper[5131]: I0107 10:19:20.503307 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/961ff40e-d41b-4c63-b871-9d8d01acfc9e-default-interconnect-inter-router-credentials\") pod \"961ff40e-d41b-4c63-b871-9d8d01acfc9e\" (UID: \"961ff40e-d41b-4c63-b871-9d8d01acfc9e\") " Jan 07 10:19:20 crc kubenswrapper[5131]: I0107 10:19:20.503396 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/524128f4-0e28-42f6-8701-6c692b44e3f6-sasl-users\") pod \"default-interconnect-55bf8d5cb-86v7n\" (UID: \"524128f4-0e28-42f6-8701-6c692b44e3f6\") " pod="service-telemetry/default-interconnect-55bf8d5cb-86v7n" Jan 07 10:19:20 crc kubenswrapper[5131]: I0107 10:19:20.503438 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/524128f4-0e28-42f6-8701-6c692b44e3f6-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-86v7n\" (UID: \"524128f4-0e28-42f6-8701-6c692b44e3f6\") " pod="service-telemetry/default-interconnect-55bf8d5cb-86v7n" Jan 07 10:19:20 crc kubenswrapper[5131]: I0107 10:19:20.503473 5131 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/524128f4-0e28-42f6-8701-6c692b44e3f6-sasl-config\") pod \"default-interconnect-55bf8d5cb-86v7n\" (UID: \"524128f4-0e28-42f6-8701-6c692b44e3f6\") " pod="service-telemetry/default-interconnect-55bf8d5cb-86v7n" Jan 07 10:19:20 crc kubenswrapper[5131]: I0107 10:19:20.503493 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/524128f4-0e28-42f6-8701-6c692b44e3f6-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-86v7n\" (UID: \"524128f4-0e28-42f6-8701-6c692b44e3f6\") " pod="service-telemetry/default-interconnect-55bf8d5cb-86v7n" Jan 07 10:19:20 crc kubenswrapper[5131]: I0107 10:19:20.503547 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbwc8\" (UniqueName: \"kubernetes.io/projected/524128f4-0e28-42f6-8701-6c692b44e3f6-kube-api-access-xbwc8\") pod \"default-interconnect-55bf8d5cb-86v7n\" (UID: \"524128f4-0e28-42f6-8701-6c692b44e3f6\") " pod="service-telemetry/default-interconnect-55bf8d5cb-86v7n" Jan 07 10:19:20 crc kubenswrapper[5131]: I0107 10:19:20.503565 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/524128f4-0e28-42f6-8701-6c692b44e3f6-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-86v7n\" (UID: \"524128f4-0e28-42f6-8701-6c692b44e3f6\") " pod="service-telemetry/default-interconnect-55bf8d5cb-86v7n" Jan 07 10:19:20 crc kubenswrapper[5131]: I0107 10:19:20.503588 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: 
\"kubernetes.io/secret/524128f4-0e28-42f6-8701-6c692b44e3f6-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-86v7n\" (UID: \"524128f4-0e28-42f6-8701-6c692b44e3f6\") " pod="service-telemetry/default-interconnect-55bf8d5cb-86v7n" Jan 07 10:19:20 crc kubenswrapper[5131]: I0107 10:19:20.508909 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/961ff40e-d41b-4c63-b871-9d8d01acfc9e-default-interconnect-openstack-credentials" (OuterVolumeSpecName: "default-interconnect-openstack-credentials") pod "961ff40e-d41b-4c63-b871-9d8d01acfc9e" (UID: "961ff40e-d41b-4c63-b871-9d8d01acfc9e"). InnerVolumeSpecName "default-interconnect-openstack-credentials". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 10:19:20 crc kubenswrapper[5131]: I0107 10:19:20.509411 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/961ff40e-d41b-4c63-b871-9d8d01acfc9e-sasl-config" (OuterVolumeSpecName: "sasl-config") pod "961ff40e-d41b-4c63-b871-9d8d01acfc9e" (UID: "961ff40e-d41b-4c63-b871-9d8d01acfc9e"). InnerVolumeSpecName "sasl-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 07 10:19:20 crc kubenswrapper[5131]: I0107 10:19:20.512075 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/961ff40e-d41b-4c63-b871-9d8d01acfc9e-kube-api-access-2hfkc" (OuterVolumeSpecName: "kube-api-access-2hfkc") pod "961ff40e-d41b-4c63-b871-9d8d01acfc9e" (UID: "961ff40e-d41b-4c63-b871-9d8d01acfc9e"). InnerVolumeSpecName "kube-api-access-2hfkc". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 10:19:20 crc kubenswrapper[5131]: I0107 10:19:20.512721 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/961ff40e-d41b-4c63-b871-9d8d01acfc9e-sasl-users" (OuterVolumeSpecName: "sasl-users") pod "961ff40e-d41b-4c63-b871-9d8d01acfc9e" (UID: "961ff40e-d41b-4c63-b871-9d8d01acfc9e"). InnerVolumeSpecName "sasl-users". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 10:19:20 crc kubenswrapper[5131]: I0107 10:19:20.518482 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/961ff40e-d41b-4c63-b871-9d8d01acfc9e-default-interconnect-openstack-ca" (OuterVolumeSpecName: "default-interconnect-openstack-ca") pod "961ff40e-d41b-4c63-b871-9d8d01acfc9e" (UID: "961ff40e-d41b-4c63-b871-9d8d01acfc9e"). InnerVolumeSpecName "default-interconnect-openstack-ca". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 10:19:20 crc kubenswrapper[5131]: I0107 10:19:20.528095 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/961ff40e-d41b-4c63-b871-9d8d01acfc9e-default-interconnect-inter-router-credentials" (OuterVolumeSpecName: "default-interconnect-inter-router-credentials") pod "961ff40e-d41b-4c63-b871-9d8d01acfc9e" (UID: "961ff40e-d41b-4c63-b871-9d8d01acfc9e"). InnerVolumeSpecName "default-interconnect-inter-router-credentials". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 10:19:20 crc kubenswrapper[5131]: I0107 10:19:20.535990 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/961ff40e-d41b-4c63-b871-9d8d01acfc9e-default-interconnect-inter-router-ca" (OuterVolumeSpecName: "default-interconnect-inter-router-ca") pod "961ff40e-d41b-4c63-b871-9d8d01acfc9e" (UID: "961ff40e-d41b-4c63-b871-9d8d01acfc9e"). InnerVolumeSpecName "default-interconnect-inter-router-ca". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 07 10:19:20 crc kubenswrapper[5131]: I0107 10:19:20.604302 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/524128f4-0e28-42f6-8701-6c692b44e3f6-sasl-config\") pod \"default-interconnect-55bf8d5cb-86v7n\" (UID: \"524128f4-0e28-42f6-8701-6c692b44e3f6\") " pod="service-telemetry/default-interconnect-55bf8d5cb-86v7n" Jan 07 10:19:20 crc kubenswrapper[5131]: I0107 10:19:20.604362 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/524128f4-0e28-42f6-8701-6c692b44e3f6-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-86v7n\" (UID: \"524128f4-0e28-42f6-8701-6c692b44e3f6\") " pod="service-telemetry/default-interconnect-55bf8d5cb-86v7n" Jan 07 10:19:20 crc kubenswrapper[5131]: I0107 10:19:20.604440 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xbwc8\" (UniqueName: \"kubernetes.io/projected/524128f4-0e28-42f6-8701-6c692b44e3f6-kube-api-access-xbwc8\") pod \"default-interconnect-55bf8d5cb-86v7n\" (UID: \"524128f4-0e28-42f6-8701-6c692b44e3f6\") " pod="service-telemetry/default-interconnect-55bf8d5cb-86v7n" Jan 07 10:19:20 crc kubenswrapper[5131]: I0107 10:19:20.604472 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/524128f4-0e28-42f6-8701-6c692b44e3f6-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-86v7n\" (UID: \"524128f4-0e28-42f6-8701-6c692b44e3f6\") " pod="service-telemetry/default-interconnect-55bf8d5cb-86v7n" Jan 07 10:19:20 crc kubenswrapper[5131]: I0107 10:19:20.604505 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" 
(UniqueName: \"kubernetes.io/secret/524128f4-0e28-42f6-8701-6c692b44e3f6-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-86v7n\" (UID: \"524128f4-0e28-42f6-8701-6c692b44e3f6\") " pod="service-telemetry/default-interconnect-55bf8d5cb-86v7n" Jan 07 10:19:20 crc kubenswrapper[5131]: I0107 10:19:20.604558 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/524128f4-0e28-42f6-8701-6c692b44e3f6-sasl-users\") pod \"default-interconnect-55bf8d5cb-86v7n\" (UID: \"524128f4-0e28-42f6-8701-6c692b44e3f6\") " pod="service-telemetry/default-interconnect-55bf8d5cb-86v7n" Jan 07 10:19:20 crc kubenswrapper[5131]: I0107 10:19:20.604614 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/524128f4-0e28-42f6-8701-6c692b44e3f6-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-86v7n\" (UID: \"524128f4-0e28-42f6-8701-6c692b44e3f6\") " pod="service-telemetry/default-interconnect-55bf8d5cb-86v7n" Jan 07 10:19:20 crc kubenswrapper[5131]: I0107 10:19:20.604693 5131 reconciler_common.go:299] "Volume detached for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/961ff40e-d41b-4c63-b871-9d8d01acfc9e-sasl-config\") on node \"crc\" DevicePath \"\"" Jan 07 10:19:20 crc kubenswrapper[5131]: I0107 10:19:20.604715 5131 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/961ff40e-d41b-4c63-b871-9d8d01acfc9e-default-interconnect-inter-router-ca\") on node \"crc\" DevicePath \"\"" Jan 07 10:19:20 crc kubenswrapper[5131]: I0107 10:19:20.604728 5131 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/961ff40e-d41b-4c63-b871-9d8d01acfc9e-default-interconnect-openstack-ca\") on node \"crc\" 
DevicePath \"\"" Jan 07 10:19:20 crc kubenswrapper[5131]: I0107 10:19:20.604741 5131 reconciler_common.go:299] "Volume detached for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/961ff40e-d41b-4c63-b871-9d8d01acfc9e-sasl-users\") on node \"crc\" DevicePath \"\"" Jan 07 10:19:20 crc kubenswrapper[5131]: I0107 10:19:20.604755 5131 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/961ff40e-d41b-4c63-b871-9d8d01acfc9e-default-interconnect-inter-router-credentials\") on node \"crc\" DevicePath \"\"" Jan 07 10:19:20 crc kubenswrapper[5131]: I0107 10:19:20.604768 5131 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/961ff40e-d41b-4c63-b871-9d8d01acfc9e-default-interconnect-openstack-credentials\") on node \"crc\" DevicePath \"\"" Jan 07 10:19:20 crc kubenswrapper[5131]: I0107 10:19:20.604781 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2hfkc\" (UniqueName: \"kubernetes.io/projected/961ff40e-d41b-4c63-b871-9d8d01acfc9e-kube-api-access-2hfkc\") on node \"crc\" DevicePath \"\"" Jan 07 10:19:20 crc kubenswrapper[5131]: I0107 10:19:20.605330 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/524128f4-0e28-42f6-8701-6c692b44e3f6-sasl-config\") pod \"default-interconnect-55bf8d5cb-86v7n\" (UID: \"524128f4-0e28-42f6-8701-6c692b44e3f6\") " pod="service-telemetry/default-interconnect-55bf8d5cb-86v7n" Jan 07 10:19:20 crc kubenswrapper[5131]: I0107 10:19:20.609679 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/524128f4-0e28-42f6-8701-6c692b44e3f6-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-86v7n\" (UID: \"524128f4-0e28-42f6-8701-6c692b44e3f6\") " 
pod="service-telemetry/default-interconnect-55bf8d5cb-86v7n" Jan 07 10:19:20 crc kubenswrapper[5131]: I0107 10:19:20.609683 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/524128f4-0e28-42f6-8701-6c692b44e3f6-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-86v7n\" (UID: \"524128f4-0e28-42f6-8701-6c692b44e3f6\") " pod="service-telemetry/default-interconnect-55bf8d5cb-86v7n" Jan 07 10:19:20 crc kubenswrapper[5131]: I0107 10:19:20.609959 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/524128f4-0e28-42f6-8701-6c692b44e3f6-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-86v7n\" (UID: \"524128f4-0e28-42f6-8701-6c692b44e3f6\") " pod="service-telemetry/default-interconnect-55bf8d5cb-86v7n" Jan 07 10:19:20 crc kubenswrapper[5131]: I0107 10:19:20.609989 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/524128f4-0e28-42f6-8701-6c692b44e3f6-sasl-users\") pod \"default-interconnect-55bf8d5cb-86v7n\" (UID: \"524128f4-0e28-42f6-8701-6c692b44e3f6\") " pod="service-telemetry/default-interconnect-55bf8d5cb-86v7n" Jan 07 10:19:20 crc kubenswrapper[5131]: I0107 10:19:20.610435 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/524128f4-0e28-42f6-8701-6c692b44e3f6-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-86v7n\" (UID: \"524128f4-0e28-42f6-8701-6c692b44e3f6\") " pod="service-telemetry/default-interconnect-55bf8d5cb-86v7n" Jan 07 10:19:20 crc kubenswrapper[5131]: I0107 10:19:20.627570 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbwc8\" (UniqueName: 
\"kubernetes.io/projected/524128f4-0e28-42f6-8701-6c692b44e3f6-kube-api-access-xbwc8\") pod \"default-interconnect-55bf8d5cb-86v7n\" (UID: \"524128f4-0e28-42f6-8701-6c692b44e3f6\") " pod="service-telemetry/default-interconnect-55bf8d5cb-86v7n" Jan 07 10:19:20 crc kubenswrapper[5131]: I0107 10:19:20.793231 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-86v7n" Jan 07 10:19:20 crc kubenswrapper[5131]: I0107 10:19:20.943269 5131 generic.go:358] "Generic (PLEG): container finished" podID="961ff40e-d41b-4c63-b871-9d8d01acfc9e" containerID="a1fa7e3ebf193de8c5e9aa0749c04fd3ac50d0a53ffbc5d46af894ce10a19eef" exitCode=0 Jan 07 10:19:20 crc kubenswrapper[5131]: I0107 10:19:20.943420 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-xqws8" Jan 07 10:19:20 crc kubenswrapper[5131]: I0107 10:19:20.944662 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-xqws8" event={"ID":"961ff40e-d41b-4c63-b871-9d8d01acfc9e","Type":"ContainerDied","Data":"a1fa7e3ebf193de8c5e9aa0749c04fd3ac50d0a53ffbc5d46af894ce10a19eef"} Jan 07 10:19:20 crc kubenswrapper[5131]: I0107 10:19:20.944733 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-xqws8" event={"ID":"961ff40e-d41b-4c63-b871-9d8d01acfc9e","Type":"ContainerDied","Data":"48503d0611935254848a5b2dcf8b89c5c2622004520b0dbfd4ce276db670aede"} Jan 07 10:19:20 crc kubenswrapper[5131]: I0107 10:19:20.944760 5131 scope.go:117] "RemoveContainer" containerID="a1fa7e3ebf193de8c5e9aa0749c04fd3ac50d0a53ffbc5d46af894ce10a19eef" Jan 07 10:19:20 crc kubenswrapper[5131]: I0107 10:19:20.947634 5131 generic.go:358] "Generic (PLEG): container finished" podID="a8abec8e-e2e1-4c4b-a2b9-3298e289f101" containerID="d56bdcda61bf32a4f80f6cb2ac06ea9503ec9de1e2ca4f29c3a1a79506a4f0f1" exitCode=0 
Jan 07 10:19:20 crc kubenswrapper[5131]: I0107 10:19:20.947755 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlrn9" event={"ID":"a8abec8e-e2e1-4c4b-a2b9-3298e289f101","Type":"ContainerDied","Data":"d56bdcda61bf32a4f80f6cb2ac06ea9503ec9de1e2ca4f29c3a1a79506a4f0f1"} Jan 07 10:19:20 crc kubenswrapper[5131]: I0107 10:19:20.948391 5131 scope.go:117] "RemoveContainer" containerID="d56bdcda61bf32a4f80f6cb2ac06ea9503ec9de1e2ca4f29c3a1a79506a4f0f1" Jan 07 10:19:20 crc kubenswrapper[5131]: I0107 10:19:20.971633 5131 generic.go:358] "Generic (PLEG): container finished" podID="98eab5a3-19fb-4e74-808d-26b8c315d76e" containerID="1869f47e856c1becb9946dfe8968042d7ead8c8eb4a166565c1e966f6349d97f" exitCode=0 Jan 07 10:19:20 crc kubenswrapper[5131]: I0107 10:19:20.971786 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7d5f9469c5-jh7bv" event={"ID":"98eab5a3-19fb-4e74-808d-26b8c315d76e","Type":"ContainerDied","Data":"1869f47e856c1becb9946dfe8968042d7ead8c8eb4a166565c1e966f6349d97f"} Jan 07 10:19:20 crc kubenswrapper[5131]: I0107 10:19:20.972197 5131 scope.go:117] "RemoveContainer" containerID="1869f47e856c1becb9946dfe8968042d7ead8c8eb4a166565c1e966f6349d97f" Jan 07 10:19:21 crc kubenswrapper[5131]: I0107 10:19:20.998245 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-xqws8"] Jan 07 10:19:21 crc kubenswrapper[5131]: I0107 10:19:21.020750 5131 scope.go:117] "RemoveContainer" containerID="a1fa7e3ebf193de8c5e9aa0749c04fd3ac50d0a53ffbc5d46af894ce10a19eef" Jan 07 10:19:21 crc kubenswrapper[5131]: E0107 10:19:21.021630 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1fa7e3ebf193de8c5e9aa0749c04fd3ac50d0a53ffbc5d46af894ce10a19eef\": container with ID starting with 
a1fa7e3ebf193de8c5e9aa0749c04fd3ac50d0a53ffbc5d46af894ce10a19eef not found: ID does not exist" containerID="a1fa7e3ebf193de8c5e9aa0749c04fd3ac50d0a53ffbc5d46af894ce10a19eef" Jan 07 10:19:21 crc kubenswrapper[5131]: I0107 10:19:21.021802 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1fa7e3ebf193de8c5e9aa0749c04fd3ac50d0a53ffbc5d46af894ce10a19eef"} err="failed to get container status \"a1fa7e3ebf193de8c5e9aa0749c04fd3ac50d0a53ffbc5d46af894ce10a19eef\": rpc error: code = NotFound desc = could not find container \"a1fa7e3ebf193de8c5e9aa0749c04fd3ac50d0a53ffbc5d46af894ce10a19eef\": container with ID starting with a1fa7e3ebf193de8c5e9aa0749c04fd3ac50d0a53ffbc5d46af894ce10a19eef not found: ID does not exist" Jan 07 10:19:21 crc kubenswrapper[5131]: I0107 10:19:21.022038 5131 generic.go:358] "Generic (PLEG): container finished" podID="4a3682af-b97f-48c1-8364-7708c5442e0c" containerID="802c2b619cb8f6a52f07103e25f13348edc80d5df3d12bc171dea9a6b98b0a3f" exitCode=0 Jan 07 10:19:21 crc kubenswrapper[5131]: I0107 10:19:21.022238 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-ccnjq" event={"ID":"4a3682af-b97f-48c1-8364-7708c5442e0c","Type":"ContainerDied","Data":"802c2b619cb8f6a52f07103e25f13348edc80d5df3d12bc171dea9a6b98b0a3f"} Jan 07 10:19:21 crc kubenswrapper[5131]: I0107 10:19:21.022991 5131 scope.go:117] "RemoveContainer" containerID="802c2b619cb8f6a52f07103e25f13348edc80d5df3d12bc171dea9a6b98b0a3f" Jan 07 10:19:21 crc kubenswrapper[5131]: I0107 10:19:21.039712 5131 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-xqws8"] Jan 07 10:19:21 crc kubenswrapper[5131]: I0107 10:19:21.057880 5131 generic.go:358] "Generic (PLEG): container finished" podID="c3630b93-73b5-4548-b124-2b200e4e5af1" containerID="ff819bf8caaa01f5927084b067942f1a1c492d0e6e58e1fdb47d7ae3fd7f8618" exitCode=0 Jan 07 10:19:21 
crc kubenswrapper[5131]: I0107 10:19:21.058333 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-zjmxl" event={"ID":"c3630b93-73b5-4548-b124-2b200e4e5af1","Type":"ContainerDied","Data":"ff819bf8caaa01f5927084b067942f1a1c492d0e6e58e1fdb47d7ae3fd7f8618"} Jan 07 10:19:21 crc kubenswrapper[5131]: I0107 10:19:21.059319 5131 scope.go:117] "RemoveContainer" containerID="ff819bf8caaa01f5927084b067942f1a1c492d0e6e58e1fdb47d7ae3fd7f8618" Jan 07 10:19:21 crc kubenswrapper[5131]: I0107 10:19:21.063551 5131 generic.go:358] "Generic (PLEG): container finished" podID="2fd50338-e2ff-4265-a53d-62252487438d" containerID="aae0ea34892194aa04cdef0aefac92ac444f822e90c25b4bc249772cfd485cfb" exitCode=0 Jan 07 10:19:21 crc kubenswrapper[5131]: I0107 10:19:21.063677 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-65cf5d5bcb-7fztr" event={"ID":"2fd50338-e2ff-4265-a53d-62252487438d","Type":"ContainerDied","Data":"aae0ea34892194aa04cdef0aefac92ac444f822e90c25b4bc249772cfd485cfb"} Jan 07 10:19:21 crc kubenswrapper[5131]: I0107 10:19:21.064173 5131 scope.go:117] "RemoveContainer" containerID="aae0ea34892194aa04cdef0aefac92ac444f822e90c25b4bc249772cfd485cfb" Jan 07 10:19:21 crc kubenswrapper[5131]: I0107 10:19:21.105670 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-86v7n"] Jan 07 10:19:22 crc kubenswrapper[5131]: I0107 10:19:22.073182 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-ccnjq" event={"ID":"4a3682af-b97f-48c1-8364-7708c5442e0c","Type":"ContainerStarted","Data":"cea809885fb8572ffc8b8ddd543c055d9b085469c3f8a8ae18c75d72b8fa1e00"} Jan 07 10:19:22 crc kubenswrapper[5131]: I0107 10:19:22.076286 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-zjmxl" event={"ID":"c3630b93-73b5-4548-b124-2b200e4e5af1","Type":"ContainerStarted","Data":"6cb917bdaa13fce39aaa9bda8598729627682d49f9edd76160f2fdeed084b82b"} Jan 07 10:19:22 crc kubenswrapper[5131]: I0107 10:19:22.079254 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-65cf5d5bcb-7fztr" event={"ID":"2fd50338-e2ff-4265-a53d-62252487438d","Type":"ContainerStarted","Data":"97ddd0503fd581c09536a308df02aa6315528224ec659ea44876d7c189bd286d"} Jan 07 10:19:22 crc kubenswrapper[5131]: I0107 10:19:22.081059 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-86v7n" event={"ID":"524128f4-0e28-42f6-8701-6c692b44e3f6","Type":"ContainerStarted","Data":"0560150a29b55714d203e1ec23c4fdfaee31e4c8de0ca504c3beead86e497fde"} Jan 07 10:19:22 crc kubenswrapper[5131]: I0107 10:19:22.081107 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-86v7n" event={"ID":"524128f4-0e28-42f6-8701-6c692b44e3f6","Type":"ContainerStarted","Data":"06a26943d3fb8ef5958714fab0466a4b7d0236b9f07603296fd2900455a3e619"} Jan 07 10:19:22 crc kubenswrapper[5131]: I0107 10:19:22.084537 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlrn9" event={"ID":"a8abec8e-e2e1-4c4b-a2b9-3298e289f101","Type":"ContainerStarted","Data":"1246e602053e84f2bcbd1b0eceeea6e595103fecf42ab7f6e92158000b44cad8"} Jan 07 10:19:22 crc kubenswrapper[5131]: I0107 10:19:22.088013 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7d5f9469c5-jh7bv" event={"ID":"98eab5a3-19fb-4e74-808d-26b8c315d76e","Type":"ContainerStarted","Data":"f9894b32a06f19e16f9c01c688f2ec96e51d05602a51679f2e698ac75168de86"} Jan 07 10:19:22 crc kubenswrapper[5131]: I0107 
10:19:22.165748 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-55bf8d5cb-86v7n" podStartSLOduration=3.164816961 podStartE2EDuration="3.164816961s" podCreationTimestamp="2026-01-07 10:19:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-07 10:19:22.160518975 +0000 UTC m=+1790.326820539" watchObservedRunningTime="2026-01-07 10:19:22.164816961 +0000 UTC m=+1790.331118525" Jan 07 10:19:22 crc kubenswrapper[5131]: I0107 10:19:22.193124 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="961ff40e-d41b-4c63-b871-9d8d01acfc9e" path="/var/lib/kubelet/pods/961ff40e-d41b-4c63-b871-9d8d01acfc9e/volumes" Jan 07 10:19:23 crc kubenswrapper[5131]: I0107 10:19:23.103023 5131 generic.go:358] "Generic (PLEG): container finished" podID="a8abec8e-e2e1-4c4b-a2b9-3298e289f101" containerID="1246e602053e84f2bcbd1b0eceeea6e595103fecf42ab7f6e92158000b44cad8" exitCode=0 Jan 07 10:19:23 crc kubenswrapper[5131]: I0107 10:19:23.103117 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlrn9" event={"ID":"a8abec8e-e2e1-4c4b-a2b9-3298e289f101","Type":"ContainerDied","Data":"1246e602053e84f2bcbd1b0eceeea6e595103fecf42ab7f6e92158000b44cad8"} Jan 07 10:19:23 crc kubenswrapper[5131]: I0107 10:19:23.103147 5131 scope.go:117] "RemoveContainer" containerID="d56bdcda61bf32a4f80f6cb2ac06ea9503ec9de1e2ca4f29c3a1a79506a4f0f1" Jan 07 10:19:23 crc kubenswrapper[5131]: I0107 10:19:23.103857 5131 scope.go:117] "RemoveContainer" containerID="1246e602053e84f2bcbd1b0eceeea6e595103fecf42ab7f6e92158000b44cad8" Jan 07 10:19:23 crc kubenswrapper[5131]: E0107 10:19:23.104225 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge 
pod=default-cloud1-sens-meter-smartgateway-58c78bbf69-tlrn9_service-telemetry(a8abec8e-e2e1-4c4b-a2b9-3298e289f101)\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlrn9" podUID="a8abec8e-e2e1-4c4b-a2b9-3298e289f101" Jan 07 10:19:23 crc kubenswrapper[5131]: I0107 10:19:23.106068 5131 generic.go:358] "Generic (PLEG): container finished" podID="98eab5a3-19fb-4e74-808d-26b8c315d76e" containerID="f9894b32a06f19e16f9c01c688f2ec96e51d05602a51679f2e698ac75168de86" exitCode=0 Jan 07 10:19:23 crc kubenswrapper[5131]: I0107 10:19:23.106250 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7d5f9469c5-jh7bv" event={"ID":"98eab5a3-19fb-4e74-808d-26b8c315d76e","Type":"ContainerDied","Data":"f9894b32a06f19e16f9c01c688f2ec96e51d05602a51679f2e698ac75168de86"} Jan 07 10:19:23 crc kubenswrapper[5131]: I0107 10:19:23.106732 5131 scope.go:117] "RemoveContainer" containerID="f9894b32a06f19e16f9c01c688f2ec96e51d05602a51679f2e698ac75168de86" Jan 07 10:19:23 crc kubenswrapper[5131]: E0107 10:19:23.107097 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-event-smartgateway-7d5f9469c5-jh7bv_service-telemetry(98eab5a3-19fb-4e74-808d-26b8c315d76e)\"" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7d5f9469c5-jh7bv" podUID="98eab5a3-19fb-4e74-808d-26b8c315d76e" Jan 07 10:19:23 crc kubenswrapper[5131]: I0107 10:19:23.111072 5131 generic.go:358] "Generic (PLEG): container finished" podID="4a3682af-b97f-48c1-8364-7708c5442e0c" containerID="cea809885fb8572ffc8b8ddd543c055d9b085469c3f8a8ae18c75d72b8fa1e00" exitCode=0 Jan 07 10:19:23 crc kubenswrapper[5131]: I0107 10:19:23.111203 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-ccnjq" 
event={"ID":"4a3682af-b97f-48c1-8364-7708c5442e0c","Type":"ContainerDied","Data":"cea809885fb8572ffc8b8ddd543c055d9b085469c3f8a8ae18c75d72b8fa1e00"} Jan 07 10:19:23 crc kubenswrapper[5131]: I0107 10:19:23.111689 5131 scope.go:117] "RemoveContainer" containerID="cea809885fb8572ffc8b8ddd543c055d9b085469c3f8a8ae18c75d72b8fa1e00" Jan 07 10:19:23 crc kubenswrapper[5131]: E0107 10:19:23.111956 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-ccnjq_service-telemetry(4a3682af-b97f-48c1-8364-7708c5442e0c)\"" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-ccnjq" podUID="4a3682af-b97f-48c1-8364-7708c5442e0c" Jan 07 10:19:23 crc kubenswrapper[5131]: I0107 10:19:23.121817 5131 generic.go:358] "Generic (PLEG): container finished" podID="c3630b93-73b5-4548-b124-2b200e4e5af1" containerID="6cb917bdaa13fce39aaa9bda8598729627682d49f9edd76160f2fdeed084b82b" exitCode=0 Jan 07 10:19:23 crc kubenswrapper[5131]: I0107 10:19:23.121924 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-zjmxl" event={"ID":"c3630b93-73b5-4548-b124-2b200e4e5af1","Type":"ContainerDied","Data":"6cb917bdaa13fce39aaa9bda8598729627682d49f9edd76160f2fdeed084b82b"} Jan 07 10:19:23 crc kubenswrapper[5131]: I0107 10:19:23.122350 5131 scope.go:117] "RemoveContainer" containerID="6cb917bdaa13fce39aaa9bda8598729627682d49f9edd76160f2fdeed084b82b" Jan 07 10:19:23 crc kubenswrapper[5131]: E0107 10:19:23.122591 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-meter-smartgateway-7f8f5c6486-zjmxl_service-telemetry(c3630b93-73b5-4548-b124-2b200e4e5af1)\"" 
pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-zjmxl" podUID="c3630b93-73b5-4548-b124-2b200e4e5af1" Jan 07 10:19:23 crc kubenswrapper[5131]: I0107 10:19:23.140132 5131 generic.go:358] "Generic (PLEG): container finished" podID="2fd50338-e2ff-4265-a53d-62252487438d" containerID="97ddd0503fd581c09536a308df02aa6315528224ec659ea44876d7c189bd286d" exitCode=0 Jan 07 10:19:23 crc kubenswrapper[5131]: I0107 10:19:23.140391 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-65cf5d5bcb-7fztr" event={"ID":"2fd50338-e2ff-4265-a53d-62252487438d","Type":"ContainerDied","Data":"97ddd0503fd581c09536a308df02aa6315528224ec659ea44876d7c189bd286d"} Jan 07 10:19:23 crc kubenswrapper[5131]: I0107 10:19:23.141273 5131 scope.go:117] "RemoveContainer" containerID="97ddd0503fd581c09536a308df02aa6315528224ec659ea44876d7c189bd286d" Jan 07 10:19:23 crc kubenswrapper[5131]: E0107 10:19:23.141566 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-event-smartgateway-65cf5d5bcb-7fztr_service-telemetry(2fd50338-e2ff-4265-a53d-62252487438d)\"" pod="service-telemetry/default-cloud1-coll-event-smartgateway-65cf5d5bcb-7fztr" podUID="2fd50338-e2ff-4265-a53d-62252487438d" Jan 07 10:19:23 crc kubenswrapper[5131]: I0107 10:19:23.150204 5131 scope.go:117] "RemoveContainer" containerID="1869f47e856c1becb9946dfe8968042d7ead8c8eb4a166565c1e966f6349d97f" Jan 07 10:19:23 crc kubenswrapper[5131]: I0107 10:19:23.205016 5131 scope.go:117] "RemoveContainer" containerID="802c2b619cb8f6a52f07103e25f13348edc80d5df3d12bc171dea9a6b98b0a3f" Jan 07 10:19:23 crc kubenswrapper[5131]: I0107 10:19:23.251143 5131 scope.go:117] "RemoveContainer" containerID="ff819bf8caaa01f5927084b067942f1a1c492d0e6e58e1fdb47d7ae3fd7f8618" Jan 07 10:19:23 crc kubenswrapper[5131]: I0107 10:19:23.290735 5131 
scope.go:117] "RemoveContainer" containerID="aae0ea34892194aa04cdef0aefac92ac444f822e90c25b4bc249772cfd485cfb" Jan 07 10:19:25 crc kubenswrapper[5131]: I0107 10:19:25.120483 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/qdr-test"] Jan 07 10:19:25 crc kubenswrapper[5131]: I0107 10:19:25.129001 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test" Jan 07 10:19:25 crc kubenswrapper[5131]: I0107 10:19:25.131011 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"qdr-test-config\"" Jan 07 10:19:25 crc kubenswrapper[5131]: I0107 10:19:25.131411 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-selfsigned\"" Jan 07 10:19:25 crc kubenswrapper[5131]: I0107 10:19:25.132035 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Jan 07 10:19:25 crc kubenswrapper[5131]: I0107 10:19:25.193666 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrmzx\" (UniqueName: \"kubernetes.io/projected/68099886-8bb2-4d13-971f-878abe2cab6c-kube-api-access-hrmzx\") pod \"qdr-test\" (UID: \"68099886-8bb2-4d13-971f-878abe2cab6c\") " pod="service-telemetry/qdr-test" Jan 07 10:19:25 crc kubenswrapper[5131]: I0107 10:19:25.193714 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/68099886-8bb2-4d13-971f-878abe2cab6c-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"68099886-8bb2-4d13-971f-878abe2cab6c\") " pod="service-telemetry/qdr-test" Jan 07 10:19:25 crc kubenswrapper[5131]: I0107 10:19:25.193781 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"qdr-test-config\" (UniqueName: 
\"kubernetes.io/configmap/68099886-8bb2-4d13-971f-878abe2cab6c-qdr-test-config\") pod \"qdr-test\" (UID: \"68099886-8bb2-4d13-971f-878abe2cab6c\") " pod="service-telemetry/qdr-test" Jan 07 10:19:25 crc kubenswrapper[5131]: I0107 10:19:25.294711 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/68099886-8bb2-4d13-971f-878abe2cab6c-qdr-test-config\") pod \"qdr-test\" (UID: \"68099886-8bb2-4d13-971f-878abe2cab6c\") " pod="service-telemetry/qdr-test" Jan 07 10:19:25 crc kubenswrapper[5131]: I0107 10:19:25.294818 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hrmzx\" (UniqueName: \"kubernetes.io/projected/68099886-8bb2-4d13-971f-878abe2cab6c-kube-api-access-hrmzx\") pod \"qdr-test\" (UID: \"68099886-8bb2-4d13-971f-878abe2cab6c\") " pod="service-telemetry/qdr-test" Jan 07 10:19:25 crc kubenswrapper[5131]: I0107 10:19:25.294871 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/68099886-8bb2-4d13-971f-878abe2cab6c-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"68099886-8bb2-4d13-971f-878abe2cab6c\") " pod="service-telemetry/qdr-test" Jan 07 10:19:25 crc kubenswrapper[5131]: I0107 10:19:25.295985 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/68099886-8bb2-4d13-971f-878abe2cab6c-qdr-test-config\") pod \"qdr-test\" (UID: \"68099886-8bb2-4d13-971f-878abe2cab6c\") " pod="service-telemetry/qdr-test" Jan 07 10:19:25 crc kubenswrapper[5131]: I0107 10:19:25.302658 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/68099886-8bb2-4d13-971f-878abe2cab6c-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: 
\"68099886-8bb2-4d13-971f-878abe2cab6c\") " pod="service-telemetry/qdr-test" Jan 07 10:19:25 crc kubenswrapper[5131]: I0107 10:19:25.318505 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrmzx\" (UniqueName: \"kubernetes.io/projected/68099886-8bb2-4d13-971f-878abe2cab6c-kube-api-access-hrmzx\") pod \"qdr-test\" (UID: \"68099886-8bb2-4d13-971f-878abe2cab6c\") " pod="service-telemetry/qdr-test" Jan 07 10:19:25 crc kubenswrapper[5131]: I0107 10:19:25.481060 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test" Jan 07 10:19:25 crc kubenswrapper[5131]: I0107 10:19:25.910520 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Jan 07 10:19:26 crc kubenswrapper[5131]: I0107 10:19:26.200096 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"68099886-8bb2-4d13-971f-878abe2cab6c","Type":"ContainerStarted","Data":"7eb4c7a090f78eb86397578160890b79e9f275a60dc58f9ffb3839fadcbbf844"} Jan 07 10:19:31 crc kubenswrapper[5131]: I0107 10:19:31.179646 5131 scope.go:117] "RemoveContainer" containerID="9663cd7495facf8f3b5c9cd42ca06c0e50d8cba730f2743bbdac9e0b5db67e25" Jan 07 10:19:31 crc kubenswrapper[5131]: E0107 10:19:31.180419 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvdrn_openshift-machine-config-operator(3942e752-44ba-4678-8723-6cd778e60d73)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73" Jan 07 10:19:34 crc kubenswrapper[5131]: I0107 10:19:34.180091 5131 scope.go:117] "RemoveContainer" containerID="6cb917bdaa13fce39aaa9bda8598729627682d49f9edd76160f2fdeed084b82b" Jan 07 10:19:36 crc kubenswrapper[5131]: I0107 10:19:36.179867 5131 scope.go:117] 
"RemoveContainer" containerID="97ddd0503fd581c09536a308df02aa6315528224ec659ea44876d7c189bd286d" Jan 07 10:19:36 crc kubenswrapper[5131]: I0107 10:19:36.639982 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wcqw9_a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1/kube-multus/0.log" Jan 07 10:19:36 crc kubenswrapper[5131]: I0107 10:19:36.641817 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wcqw9_a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1/kube-multus/0.log" Jan 07 10:19:36 crc kubenswrapper[5131]: I0107 10:19:36.643215 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 07 10:19:36 crc kubenswrapper[5131]: I0107 10:19:36.645404 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 07 10:19:37 crc kubenswrapper[5131]: I0107 10:19:37.180879 5131 scope.go:117] "RemoveContainer" containerID="cea809885fb8572ffc8b8ddd543c055d9b085469c3f8a8ae18c75d72b8fa1e00" Jan 07 10:19:37 crc kubenswrapper[5131]: I0107 10:19:37.291150 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"68099886-8bb2-4d13-971f-878abe2cab6c","Type":"ContainerStarted","Data":"ada23148380090086025ed3da251542029e3dfdb2a6b21b696447a4bdf9f3a02"} Jan 07 10:19:37 crc kubenswrapper[5131]: I0107 10:19:37.294549 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-zjmxl" event={"ID":"c3630b93-73b5-4548-b124-2b200e4e5af1","Type":"ContainerStarted","Data":"102d7a5cc3aafc68054c3aee1707e9f2fb68fc2272ada72ad9d6da5540f3eb42"} Jan 07 10:19:37 crc kubenswrapper[5131]: I0107 10:19:37.298334 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/default-cloud1-coll-event-smartgateway-65cf5d5bcb-7fztr" event={"ID":"2fd50338-e2ff-4265-a53d-62252487438d","Type":"ContainerStarted","Data":"e866f70ad6bc38a5f688df23f364653ab932d644c3fcd0684ca5e60fc69e6a24"} Jan 07 10:19:37 crc kubenswrapper[5131]: I0107 10:19:37.309461 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/qdr-test" podStartSLOduration=1.533774482 podStartE2EDuration="12.309433106s" podCreationTimestamp="2026-01-07 10:19:25 +0000 UTC" firstStartedPulling="2026-01-07 10:19:25.918038784 +0000 UTC m=+1794.084340338" lastFinishedPulling="2026-01-07 10:19:36.693697398 +0000 UTC m=+1804.859998962" observedRunningTime="2026-01-07 10:19:37.307067458 +0000 UTC m=+1805.473369032" watchObservedRunningTime="2026-01-07 10:19:37.309433106 +0000 UTC m=+1805.475734690" Jan 07 10:19:37 crc kubenswrapper[5131]: I0107 10:19:37.710619 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-vrt8f"] Jan 07 10:19:37 crc kubenswrapper[5131]: I0107 10:19:37.719452 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-vrt8f" Jan 07 10:19:37 crc kubenswrapper[5131]: I0107 10:19:37.721715 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-vrt8f"] Jan 07 10:19:37 crc kubenswrapper[5131]: I0107 10:19:37.730023 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-collectd-config\"" Jan 07 10:19:37 crc kubenswrapper[5131]: I0107 10:19:37.730137 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-healthcheck-log\"" Jan 07 10:19:37 crc kubenswrapper[5131]: I0107 10:19:37.730440 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-collectd-entrypoint-script\"" Jan 07 10:19:37 crc kubenswrapper[5131]: I0107 10:19:37.730443 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-ceilometer-publisher\"" Jan 07 10:19:37 crc kubenswrapper[5131]: I0107 10:19:37.730582 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-sensubility-config\"" Jan 07 10:19:37 crc kubenswrapper[5131]: I0107 10:19:37.731468 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-ceilometer-entrypoint-script\"" Jan 07 10:19:37 crc kubenswrapper[5131]: I0107 10:19:37.881041 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/97b87647-a485-4586-a9db-65036bdb9190-ceilometer-publisher\") pod \"stf-smoketest-smoke1-vrt8f\" (UID: \"97b87647-a485-4586-a9db-65036bdb9190\") " pod="service-telemetry/stf-smoketest-smoke1-vrt8f" Jan 07 10:19:37 crc kubenswrapper[5131]: I0107 10:19:37.881100 5131 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/97b87647-a485-4586-a9db-65036bdb9190-collectd-config\") pod \"stf-smoketest-smoke1-vrt8f\" (UID: \"97b87647-a485-4586-a9db-65036bdb9190\") " pod="service-telemetry/stf-smoketest-smoke1-vrt8f" Jan 07 10:19:37 crc kubenswrapper[5131]: I0107 10:19:37.881153 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/97b87647-a485-4586-a9db-65036bdb9190-healthcheck-log\") pod \"stf-smoketest-smoke1-vrt8f\" (UID: \"97b87647-a485-4586-a9db-65036bdb9190\") " pod="service-telemetry/stf-smoketest-smoke1-vrt8f" Jan 07 10:19:37 crc kubenswrapper[5131]: I0107 10:19:37.881180 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/97b87647-a485-4586-a9db-65036bdb9190-sensubility-config\") pod \"stf-smoketest-smoke1-vrt8f\" (UID: \"97b87647-a485-4586-a9db-65036bdb9190\") " pod="service-telemetry/stf-smoketest-smoke1-vrt8f" Jan 07 10:19:37 crc kubenswrapper[5131]: I0107 10:19:37.881207 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/97b87647-a485-4586-a9db-65036bdb9190-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-vrt8f\" (UID: \"97b87647-a485-4586-a9db-65036bdb9190\") " pod="service-telemetry/stf-smoketest-smoke1-vrt8f" Jan 07 10:19:37 crc kubenswrapper[5131]: I0107 10:19:37.881326 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/97b87647-a485-4586-a9db-65036bdb9190-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-vrt8f\" (UID: 
\"97b87647-a485-4586-a9db-65036bdb9190\") " pod="service-telemetry/stf-smoketest-smoke1-vrt8f" Jan 07 10:19:37 crc kubenswrapper[5131]: I0107 10:19:37.881377 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msfqm\" (UniqueName: \"kubernetes.io/projected/97b87647-a485-4586-a9db-65036bdb9190-kube-api-access-msfqm\") pod \"stf-smoketest-smoke1-vrt8f\" (UID: \"97b87647-a485-4586-a9db-65036bdb9190\") " pod="service-telemetry/stf-smoketest-smoke1-vrt8f" Jan 07 10:19:37 crc kubenswrapper[5131]: I0107 10:19:37.983088 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/97b87647-a485-4586-a9db-65036bdb9190-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-vrt8f\" (UID: \"97b87647-a485-4586-a9db-65036bdb9190\") " pod="service-telemetry/stf-smoketest-smoke1-vrt8f" Jan 07 10:19:37 crc kubenswrapper[5131]: I0107 10:19:37.983387 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-msfqm\" (UniqueName: \"kubernetes.io/projected/97b87647-a485-4586-a9db-65036bdb9190-kube-api-access-msfqm\") pod \"stf-smoketest-smoke1-vrt8f\" (UID: \"97b87647-a485-4586-a9db-65036bdb9190\") " pod="service-telemetry/stf-smoketest-smoke1-vrt8f" Jan 07 10:19:37 crc kubenswrapper[5131]: I0107 10:19:37.983496 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/97b87647-a485-4586-a9db-65036bdb9190-ceilometer-publisher\") pod \"stf-smoketest-smoke1-vrt8f\" (UID: \"97b87647-a485-4586-a9db-65036bdb9190\") " pod="service-telemetry/stf-smoketest-smoke1-vrt8f" Jan 07 10:19:37 crc kubenswrapper[5131]: I0107 10:19:37.983589 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: 
\"kubernetes.io/configmap/97b87647-a485-4586-a9db-65036bdb9190-collectd-config\") pod \"stf-smoketest-smoke1-vrt8f\" (UID: \"97b87647-a485-4586-a9db-65036bdb9190\") " pod="service-telemetry/stf-smoketest-smoke1-vrt8f" Jan 07 10:19:37 crc kubenswrapper[5131]: I0107 10:19:37.983679 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/97b87647-a485-4586-a9db-65036bdb9190-healthcheck-log\") pod \"stf-smoketest-smoke1-vrt8f\" (UID: \"97b87647-a485-4586-a9db-65036bdb9190\") " pod="service-telemetry/stf-smoketest-smoke1-vrt8f" Jan 07 10:19:37 crc kubenswrapper[5131]: I0107 10:19:37.983745 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/97b87647-a485-4586-a9db-65036bdb9190-sensubility-config\") pod \"stf-smoketest-smoke1-vrt8f\" (UID: \"97b87647-a485-4586-a9db-65036bdb9190\") " pod="service-telemetry/stf-smoketest-smoke1-vrt8f" Jan 07 10:19:37 crc kubenswrapper[5131]: I0107 10:19:37.983827 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/97b87647-a485-4586-a9db-65036bdb9190-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-vrt8f\" (UID: \"97b87647-a485-4586-a9db-65036bdb9190\") " pod="service-telemetry/stf-smoketest-smoke1-vrt8f" Jan 07 10:19:37 crc kubenswrapper[5131]: I0107 10:19:37.984434 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/97b87647-a485-4586-a9db-65036bdb9190-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-vrt8f\" (UID: \"97b87647-a485-4586-a9db-65036bdb9190\") " pod="service-telemetry/stf-smoketest-smoke1-vrt8f" Jan 07 10:19:37 crc kubenswrapper[5131]: I0107 10:19:37.984434 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/97b87647-a485-4586-a9db-65036bdb9190-ceilometer-publisher\") pod \"stf-smoketest-smoke1-vrt8f\" (UID: \"97b87647-a485-4586-a9db-65036bdb9190\") " pod="service-telemetry/stf-smoketest-smoke1-vrt8f" Jan 07 10:19:37 crc kubenswrapper[5131]: I0107 10:19:37.985095 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/97b87647-a485-4586-a9db-65036bdb9190-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-vrt8f\" (UID: \"97b87647-a485-4586-a9db-65036bdb9190\") " pod="service-telemetry/stf-smoketest-smoke1-vrt8f" Jan 07 10:19:37 crc kubenswrapper[5131]: I0107 10:19:37.985133 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/97b87647-a485-4586-a9db-65036bdb9190-healthcheck-log\") pod \"stf-smoketest-smoke1-vrt8f\" (UID: \"97b87647-a485-4586-a9db-65036bdb9190\") " pod="service-telemetry/stf-smoketest-smoke1-vrt8f" Jan 07 10:19:37 crc kubenswrapper[5131]: I0107 10:19:37.985291 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/97b87647-a485-4586-a9db-65036bdb9190-collectd-config\") pod \"stf-smoketest-smoke1-vrt8f\" (UID: \"97b87647-a485-4586-a9db-65036bdb9190\") " pod="service-telemetry/stf-smoketest-smoke1-vrt8f" Jan 07 10:19:37 crc kubenswrapper[5131]: I0107 10:19:37.985338 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/97b87647-a485-4586-a9db-65036bdb9190-sensubility-config\") pod \"stf-smoketest-smoke1-vrt8f\" (UID: \"97b87647-a485-4586-a9db-65036bdb9190\") " pod="service-telemetry/stf-smoketest-smoke1-vrt8f" Jan 07 10:19:38 crc kubenswrapper[5131]: I0107 10:19:38.008080 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-msfqm\" (UniqueName: \"kubernetes.io/projected/97b87647-a485-4586-a9db-65036bdb9190-kube-api-access-msfqm\") pod \"stf-smoketest-smoke1-vrt8f\" (UID: \"97b87647-a485-4586-a9db-65036bdb9190\") " pod="service-telemetry/stf-smoketest-smoke1-vrt8f" Jan 07 10:19:38 crc kubenswrapper[5131]: I0107 10:19:38.052774 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-vrt8f" Jan 07 10:19:38 crc kubenswrapper[5131]: I0107 10:19:38.177035 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/curl"] Jan 07 10:19:38 crc kubenswrapper[5131]: I0107 10:19:38.219070 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Jan 07 10:19:38 crc kubenswrapper[5131]: I0107 10:19:38.220572 5131 scope.go:117] "RemoveContainer" containerID="1246e602053e84f2bcbd1b0eceeea6e595103fecf42ab7f6e92158000b44cad8" Jan 07 10:19:38 crc kubenswrapper[5131]: I0107 10:19:38.220640 5131 scope.go:117] "RemoveContainer" containerID="f9894b32a06f19e16f9c01c688f2ec96e51d05602a51679f2e698ac75168de86" Jan 07 10:19:38 crc kubenswrapper[5131]: I0107 10:19:38.239336 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Jan 07 10:19:38 crc kubenswrapper[5131]: I0107 10:19:38.308553 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-ccnjq" event={"ID":"4a3682af-b97f-48c1-8364-7708c5442e0c","Type":"ContainerStarted","Data":"a8cc011b297d03999401e6ddb74caefa10f489a9472b7db3eab100108de0461e"} Jan 07 10:19:38 crc kubenswrapper[5131]: I0107 10:19:38.390902 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4rxk\" (UniqueName: \"kubernetes.io/projected/f34fc473-5524-4577-af38-592d6ad66b40-kube-api-access-v4rxk\") pod \"curl\" (UID: \"f34fc473-5524-4577-af38-592d6ad66b40\") " 
pod="service-telemetry/curl" Jan 07 10:19:38 crc kubenswrapper[5131]: I0107 10:19:38.493493 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v4rxk\" (UniqueName: \"kubernetes.io/projected/f34fc473-5524-4577-af38-592d6ad66b40-kube-api-access-v4rxk\") pod \"curl\" (UID: \"f34fc473-5524-4577-af38-592d6ad66b40\") " pod="service-telemetry/curl" Jan 07 10:19:38 crc kubenswrapper[5131]: I0107 10:19:38.502788 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-vrt8f"] Jan 07 10:19:38 crc kubenswrapper[5131]: W0107 10:19:38.520075 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97b87647_a485_4586_a9db_65036bdb9190.slice/crio-1486fa00e5547dbc389728e3f6393b6d7b25cf1183865163e5b47aba33caa111 WatchSource:0}: Error finding container 1486fa00e5547dbc389728e3f6393b6d7b25cf1183865163e5b47aba33caa111: Status 404 returned error can't find the container with id 1486fa00e5547dbc389728e3f6393b6d7b25cf1183865163e5b47aba33caa111 Jan 07 10:19:38 crc kubenswrapper[5131]: I0107 10:19:38.529513 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4rxk\" (UniqueName: \"kubernetes.io/projected/f34fc473-5524-4577-af38-592d6ad66b40-kube-api-access-v4rxk\") pod \"curl\" (UID: \"f34fc473-5524-4577-af38-592d6ad66b40\") " pod="service-telemetry/curl" Jan 07 10:19:38 crc kubenswrapper[5131]: I0107 10:19:38.553101 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Jan 07 10:19:38 crc kubenswrapper[5131]: I0107 10:19:38.845451 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Jan 07 10:19:38 crc kubenswrapper[5131]: W0107 10:19:38.856181 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf34fc473_5524_4577_af38_592d6ad66b40.slice/crio-34375782da696124160c3a25ea0c2cd9bf73a435893b88728b7be2f0cfbd2043 WatchSource:0}: Error finding container 34375782da696124160c3a25ea0c2cd9bf73a435893b88728b7be2f0cfbd2043: Status 404 returned error can't find the container with id 34375782da696124160c3a25ea0c2cd9bf73a435893b88728b7be2f0cfbd2043 Jan 07 10:19:39 crc kubenswrapper[5131]: I0107 10:19:39.318656 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"f34fc473-5524-4577-af38-592d6ad66b40","Type":"ContainerStarted","Data":"34375782da696124160c3a25ea0c2cd9bf73a435893b88728b7be2f0cfbd2043"} Jan 07 10:19:39 crc kubenswrapper[5131]: I0107 10:19:39.321513 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-tlrn9" event={"ID":"a8abec8e-e2e1-4c4b-a2b9-3298e289f101","Type":"ContainerStarted","Data":"03e0faa6b054408f4b363b151453a11ac3782f04623c4052ed89502e705a90cb"} Jan 07 10:19:39 crc kubenswrapper[5131]: I0107 10:19:39.327402 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7d5f9469c5-jh7bv" event={"ID":"98eab5a3-19fb-4e74-808d-26b8c315d76e","Type":"ContainerStarted","Data":"e5566c2d682f5a42c37588b26cb750b4356a5413c40eac5a6dbdae81653aa801"} Jan 07 10:19:39 crc kubenswrapper[5131]: I0107 10:19:39.329111 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-vrt8f" 
event={"ID":"97b87647-a485-4586-a9db-65036bdb9190","Type":"ContainerStarted","Data":"1486fa00e5547dbc389728e3f6393b6d7b25cf1183865163e5b47aba33caa111"} Jan 07 10:19:41 crc kubenswrapper[5131]: I0107 10:19:41.347533 5131 generic.go:358] "Generic (PLEG): container finished" podID="f34fc473-5524-4577-af38-592d6ad66b40" containerID="8d250111e408d79eff3d03cd5e800d42c71c9ead29b2055c550e0d942fd3f95f" exitCode=0 Jan 07 10:19:41 crc kubenswrapper[5131]: I0107 10:19:41.347581 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"f34fc473-5524-4577-af38-592d6ad66b40","Type":"ContainerDied","Data":"8d250111e408d79eff3d03cd5e800d42c71c9ead29b2055c550e0d942fd3f95f"} Jan 07 10:19:45 crc kubenswrapper[5131]: I0107 10:19:45.180347 5131 scope.go:117] "RemoveContainer" containerID="9663cd7495facf8f3b5c9cd42ca06c0e50d8cba730f2743bbdac9e0b5db67e25" Jan 07 10:19:45 crc kubenswrapper[5131]: E0107 10:19:45.181304 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvdrn_openshift-machine-config-operator(3942e752-44ba-4678-8723-6cd778e60d73)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73" Jan 07 10:19:48 crc kubenswrapper[5131]: I0107 10:19:48.665090 5131 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Jan 07 10:19:48 crc kubenswrapper[5131]: I0107 10:19:48.757350 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4rxk\" (UniqueName: \"kubernetes.io/projected/f34fc473-5524-4577-af38-592d6ad66b40-kube-api-access-v4rxk\") pod \"f34fc473-5524-4577-af38-592d6ad66b40\" (UID: \"f34fc473-5524-4577-af38-592d6ad66b40\") " Jan 07 10:19:48 crc kubenswrapper[5131]: I0107 10:19:48.761988 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f34fc473-5524-4577-af38-592d6ad66b40-kube-api-access-v4rxk" (OuterVolumeSpecName: "kube-api-access-v4rxk") pod "f34fc473-5524-4577-af38-592d6ad66b40" (UID: "f34fc473-5524-4577-af38-592d6ad66b40"). InnerVolumeSpecName "kube-api-access-v4rxk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 10:19:48 crc kubenswrapper[5131]: I0107 10:19:48.859518 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v4rxk\" (UniqueName: \"kubernetes.io/projected/f34fc473-5524-4577-af38-592d6ad66b40-kube-api-access-v4rxk\") on node \"crc\" DevicePath \"\"" Jan 07 10:19:48 crc kubenswrapper[5131]: I0107 10:19:48.885708 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_curl_f34fc473-5524-4577-af38-592d6ad66b40/curl/0.log" Jan 07 10:19:49 crc kubenswrapper[5131]: I0107 10:19:49.262713 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-694dc457d5-spwzd_a2b52004-8840-457e-8c8b-a42110570a94/prometheus-webhook-snmp/0.log" Jan 07 10:19:49 crc kubenswrapper[5131]: I0107 10:19:49.409598 5131 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Jan 07 10:19:49 crc kubenswrapper[5131]: I0107 10:19:49.409670 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"f34fc473-5524-4577-af38-592d6ad66b40","Type":"ContainerDied","Data":"34375782da696124160c3a25ea0c2cd9bf73a435893b88728b7be2f0cfbd2043"} Jan 07 10:19:49 crc kubenswrapper[5131]: I0107 10:19:49.409723 5131 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34375782da696124160c3a25ea0c2cd9bf73a435893b88728b7be2f0cfbd2043" Jan 07 10:19:49 crc kubenswrapper[5131]: I0107 10:19:49.411875 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-vrt8f" event={"ID":"97b87647-a485-4586-a9db-65036bdb9190","Type":"ContainerStarted","Data":"e9d2163f2050a6c9231baa844de66f4a42d2d97550132c2bde30f86aa577f46c"} Jan 07 10:19:55 crc kubenswrapper[5131]: I0107 10:19:55.466266 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-vrt8f" event={"ID":"97b87647-a485-4586-a9db-65036bdb9190","Type":"ContainerStarted","Data":"ad5e3b42357318fd1c1f92ab74e24c23ff24de183a2e39d22f261e56e082cdc4"} Jan 07 10:19:55 crc kubenswrapper[5131]: I0107 10:19:55.504634 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/stf-smoketest-smoke1-vrt8f" podStartSLOduration=2.627654255 podStartE2EDuration="18.504602758s" podCreationTimestamp="2026-01-07 10:19:37 +0000 UTC" firstStartedPulling="2026-01-07 10:19:38.523177589 +0000 UTC m=+1806.689479153" lastFinishedPulling="2026-01-07 10:19:54.400126072 +0000 UTC m=+1822.566427656" observedRunningTime="2026-01-07 10:19:55.493104314 +0000 UTC m=+1823.659405938" watchObservedRunningTime="2026-01-07 10:19:55.504602758 +0000 UTC m=+1823.670904362" Jan 07 10:19:59 crc kubenswrapper[5131]: I0107 10:19:59.180210 5131 scope.go:117] "RemoveContainer" 
containerID="9663cd7495facf8f3b5c9cd42ca06c0e50d8cba730f2743bbdac9e0b5db67e25" Jan 07 10:19:59 crc kubenswrapper[5131]: E0107 10:19:59.180891 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvdrn_openshift-machine-config-operator(3942e752-44ba-4678-8723-6cd778e60d73)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73" Jan 07 10:20:00 crc kubenswrapper[5131]: I0107 10:20:00.139652 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29463020-bbg74"] Jan 07 10:20:00 crc kubenswrapper[5131]: I0107 10:20:00.141106 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f34fc473-5524-4577-af38-592d6ad66b40" containerName="curl" Jan 07 10:20:00 crc kubenswrapper[5131]: I0107 10:20:00.141247 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="f34fc473-5524-4577-af38-592d6ad66b40" containerName="curl" Jan 07 10:20:00 crc kubenswrapper[5131]: I0107 10:20:00.141517 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="f34fc473-5524-4577-af38-592d6ad66b40" containerName="curl" Jan 07 10:20:03 crc kubenswrapper[5131]: I0107 10:20:03.436599 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29463020-bbg74" Jan 07 10:20:03 crc kubenswrapper[5131]: I0107 10:20:03.445808 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29463020-bbg74"] Jan 07 10:20:03 crc kubenswrapper[5131]: I0107 10:20:03.446660 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 07 10:20:03 crc kubenswrapper[5131]: I0107 10:20:03.448489 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 07 10:20:03 crc kubenswrapper[5131]: I0107 10:20:03.448917 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-l8fwl\"" Jan 07 10:20:03 crc kubenswrapper[5131]: I0107 10:20:03.583227 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27ltd\" (UniqueName: \"kubernetes.io/projected/9ecbd976-19bc-466a-8c73-9fcde5c4f266-kube-api-access-27ltd\") pod \"auto-csr-approver-29463020-bbg74\" (UID: \"9ecbd976-19bc-466a-8c73-9fcde5c4f266\") " pod="openshift-infra/auto-csr-approver-29463020-bbg74" Jan 07 10:20:03 crc kubenswrapper[5131]: I0107 10:20:03.684972 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-27ltd\" (UniqueName: \"kubernetes.io/projected/9ecbd976-19bc-466a-8c73-9fcde5c4f266-kube-api-access-27ltd\") pod \"auto-csr-approver-29463020-bbg74\" (UID: \"9ecbd976-19bc-466a-8c73-9fcde5c4f266\") " pod="openshift-infra/auto-csr-approver-29463020-bbg74" Jan 07 10:20:03 crc kubenswrapper[5131]: I0107 10:20:03.723383 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-27ltd\" (UniqueName: \"kubernetes.io/projected/9ecbd976-19bc-466a-8c73-9fcde5c4f266-kube-api-access-27ltd\") pod \"auto-csr-approver-29463020-bbg74\" (UID: 
\"9ecbd976-19bc-466a-8c73-9fcde5c4f266\") " pod="openshift-infra/auto-csr-approver-29463020-bbg74" Jan 07 10:20:03 crc kubenswrapper[5131]: I0107 10:20:03.761828 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29463020-bbg74" Jan 07 10:20:03 crc kubenswrapper[5131]: I0107 10:20:03.992724 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29463020-bbg74"] Jan 07 10:20:03 crc kubenswrapper[5131]: W0107 10:20:03.995846 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ecbd976_19bc_466a_8c73_9fcde5c4f266.slice/crio-38f265dad66da0beafa7d4dbde4531879abb20c01db0008daef91e99cd7885e2 WatchSource:0}: Error finding container 38f265dad66da0beafa7d4dbde4531879abb20c01db0008daef91e99cd7885e2: Status 404 returned error can't find the container with id 38f265dad66da0beafa7d4dbde4531879abb20c01db0008daef91e99cd7885e2 Jan 07 10:20:04 crc kubenswrapper[5131]: I0107 10:20:04.547345 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29463020-bbg74" event={"ID":"9ecbd976-19bc-466a-8c73-9fcde5c4f266","Type":"ContainerStarted","Data":"38f265dad66da0beafa7d4dbde4531879abb20c01db0008daef91e99cd7885e2"} Jan 07 10:20:06 crc kubenswrapper[5131]: I0107 10:20:06.569089 5131 generic.go:358] "Generic (PLEG): container finished" podID="9ecbd976-19bc-466a-8c73-9fcde5c4f266" containerID="5e98ff950110574b915e29387b7a8135c607e2cff01ce121be49d2b6e6e1e536" exitCode=0 Jan 07 10:20:06 crc kubenswrapper[5131]: I0107 10:20:06.569226 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29463020-bbg74" event={"ID":"9ecbd976-19bc-466a-8c73-9fcde5c4f266","Type":"ContainerDied","Data":"5e98ff950110574b915e29387b7a8135c607e2cff01ce121be49d2b6e6e1e536"} Jan 07 10:20:07 crc kubenswrapper[5131]: I0107 10:20:07.861295 5131 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29463020-bbg74" Jan 07 10:20:07 crc kubenswrapper[5131]: I0107 10:20:07.952416 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27ltd\" (UniqueName: \"kubernetes.io/projected/9ecbd976-19bc-466a-8c73-9fcde5c4f266-kube-api-access-27ltd\") pod \"9ecbd976-19bc-466a-8c73-9fcde5c4f266\" (UID: \"9ecbd976-19bc-466a-8c73-9fcde5c4f266\") " Jan 07 10:20:07 crc kubenswrapper[5131]: I0107 10:20:07.974566 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ecbd976-19bc-466a-8c73-9fcde5c4f266-kube-api-access-27ltd" (OuterVolumeSpecName: "kube-api-access-27ltd") pod "9ecbd976-19bc-466a-8c73-9fcde5c4f266" (UID: "9ecbd976-19bc-466a-8c73-9fcde5c4f266"). InnerVolumeSpecName "kube-api-access-27ltd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 10:20:08 crc kubenswrapper[5131]: I0107 10:20:08.054423 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-27ltd\" (UniqueName: \"kubernetes.io/projected/9ecbd976-19bc-466a-8c73-9fcde5c4f266-kube-api-access-27ltd\") on node \"crc\" DevicePath \"\"" Jan 07 10:20:08 crc kubenswrapper[5131]: I0107 10:20:08.589597 5131 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29463020-bbg74" Jan 07 10:20:08 crc kubenswrapper[5131]: I0107 10:20:08.589720 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29463020-bbg74" event={"ID":"9ecbd976-19bc-466a-8c73-9fcde5c4f266","Type":"ContainerDied","Data":"38f265dad66da0beafa7d4dbde4531879abb20c01db0008daef91e99cd7885e2"} Jan 07 10:20:08 crc kubenswrapper[5131]: I0107 10:20:08.589781 5131 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38f265dad66da0beafa7d4dbde4531879abb20c01db0008daef91e99cd7885e2" Jan 07 10:20:08 crc kubenswrapper[5131]: I0107 10:20:08.937376 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29463014-svd7q"] Jan 07 10:20:08 crc kubenswrapper[5131]: I0107 10:20:08.942322 5131 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29463014-svd7q"] Jan 07 10:20:10 crc kubenswrapper[5131]: I0107 10:20:10.189219 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7d21fe3-ed1a-4c84-932c-be16c225cf34" path="/var/lib/kubelet/pods/e7d21fe3-ed1a-4c84-932c-be16c225cf34/volumes" Jan 07 10:20:13 crc kubenswrapper[5131]: I0107 10:20:13.180423 5131 scope.go:117] "RemoveContainer" containerID="9663cd7495facf8f3b5c9cd42ca06c0e50d8cba730f2743bbdac9e0b5db67e25" Jan 07 10:20:13 crc kubenswrapper[5131]: E0107 10:20:13.181110 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvdrn_openshift-machine-config-operator(3942e752-44ba-4678-8723-6cd778e60d73)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73" Jan 07 10:20:19 crc kubenswrapper[5131]: I0107 10:20:19.460781 5131 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_default-snmp-webhook-694dc457d5-spwzd_a2b52004-8840-457e-8c8b-a42110570a94/prometheus-webhook-snmp/0.log" Jan 07 10:20:22 crc kubenswrapper[5131]: I0107 10:20:22.721010 5131 generic.go:358] "Generic (PLEG): container finished" podID="97b87647-a485-4586-a9db-65036bdb9190" containerID="e9d2163f2050a6c9231baa844de66f4a42d2d97550132c2bde30f86aa577f46c" exitCode=0 Jan 07 10:20:22 crc kubenswrapper[5131]: I0107 10:20:22.721091 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-vrt8f" event={"ID":"97b87647-a485-4586-a9db-65036bdb9190","Type":"ContainerDied","Data":"e9d2163f2050a6c9231baa844de66f4a42d2d97550132c2bde30f86aa577f46c"} Jan 07 10:20:22 crc kubenswrapper[5131]: I0107 10:20:22.722359 5131 scope.go:117] "RemoveContainer" containerID="e9d2163f2050a6c9231baa844de66f4a42d2d97550132c2bde30f86aa577f46c" Jan 07 10:20:26 crc kubenswrapper[5131]: I0107 10:20:26.180811 5131 scope.go:117] "RemoveContainer" containerID="9663cd7495facf8f3b5c9cd42ca06c0e50d8cba730f2743bbdac9e0b5db67e25" Jan 07 10:20:26 crc kubenswrapper[5131]: E0107 10:20:26.181524 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvdrn_openshift-machine-config-operator(3942e752-44ba-4678-8723-6cd778e60d73)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73" Jan 07 10:20:26 crc kubenswrapper[5131]: I0107 10:20:26.759125 5131 generic.go:358] "Generic (PLEG): container finished" podID="97b87647-a485-4586-a9db-65036bdb9190" containerID="ad5e3b42357318fd1c1f92ab74e24c23ff24de183a2e39d22f261e56e082cdc4" exitCode=0 Jan 07 10:20:26 crc kubenswrapper[5131]: I0107 10:20:26.759504 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-vrt8f" 
event={"ID":"97b87647-a485-4586-a9db-65036bdb9190","Type":"ContainerDied","Data":"ad5e3b42357318fd1c1f92ab74e24c23ff24de183a2e39d22f261e56e082cdc4"} Jan 07 10:20:28 crc kubenswrapper[5131]: I0107 10:20:28.085698 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-vrt8f" Jan 07 10:20:28 crc kubenswrapper[5131]: I0107 10:20:28.197523 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/97b87647-a485-4586-a9db-65036bdb9190-ceilometer-entrypoint-script\") pod \"97b87647-a485-4586-a9db-65036bdb9190\" (UID: \"97b87647-a485-4586-a9db-65036bdb9190\") " Jan 07 10:20:28 crc kubenswrapper[5131]: I0107 10:20:28.197640 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/97b87647-a485-4586-a9db-65036bdb9190-collectd-config\") pod \"97b87647-a485-4586-a9db-65036bdb9190\" (UID: \"97b87647-a485-4586-a9db-65036bdb9190\") " Jan 07 10:20:28 crc kubenswrapper[5131]: I0107 10:20:28.197725 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msfqm\" (UniqueName: \"kubernetes.io/projected/97b87647-a485-4586-a9db-65036bdb9190-kube-api-access-msfqm\") pod \"97b87647-a485-4586-a9db-65036bdb9190\" (UID: \"97b87647-a485-4586-a9db-65036bdb9190\") " Jan 07 10:20:28 crc kubenswrapper[5131]: I0107 10:20:28.197857 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/97b87647-a485-4586-a9db-65036bdb9190-sensubility-config\") pod \"97b87647-a485-4586-a9db-65036bdb9190\" (UID: \"97b87647-a485-4586-a9db-65036bdb9190\") " Jan 07 10:20:28 crc kubenswrapper[5131]: I0107 10:20:28.197995 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: 
\"kubernetes.io/configmap/97b87647-a485-4586-a9db-65036bdb9190-ceilometer-publisher\") pod \"97b87647-a485-4586-a9db-65036bdb9190\" (UID: \"97b87647-a485-4586-a9db-65036bdb9190\") "
Jan 07 10:20:28 crc kubenswrapper[5131]: I0107 10:20:28.198049 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/97b87647-a485-4586-a9db-65036bdb9190-healthcheck-log\") pod \"97b87647-a485-4586-a9db-65036bdb9190\" (UID: \"97b87647-a485-4586-a9db-65036bdb9190\") "
Jan 07 10:20:28 crc kubenswrapper[5131]: I0107 10:20:28.198100 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/97b87647-a485-4586-a9db-65036bdb9190-collectd-entrypoint-script\") pod \"97b87647-a485-4586-a9db-65036bdb9190\" (UID: \"97b87647-a485-4586-a9db-65036bdb9190\") "
Jan 07 10:20:28 crc kubenswrapper[5131]: I0107 10:20:28.205070 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97b87647-a485-4586-a9db-65036bdb9190-kube-api-access-msfqm" (OuterVolumeSpecName: "kube-api-access-msfqm") pod "97b87647-a485-4586-a9db-65036bdb9190" (UID: "97b87647-a485-4586-a9db-65036bdb9190"). InnerVolumeSpecName "kube-api-access-msfqm". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 07 10:20:28 crc kubenswrapper[5131]: I0107 10:20:28.215222 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97b87647-a485-4586-a9db-65036bdb9190-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "97b87647-a485-4586-a9db-65036bdb9190" (UID: "97b87647-a485-4586-a9db-65036bdb9190"). InnerVolumeSpecName "ceilometer-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 07 10:20:28 crc kubenswrapper[5131]: I0107 10:20:28.216211 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97b87647-a485-4586-a9db-65036bdb9190-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "97b87647-a485-4586-a9db-65036bdb9190" (UID: "97b87647-a485-4586-a9db-65036bdb9190"). InnerVolumeSpecName "collectd-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 07 10:20:28 crc kubenswrapper[5131]: I0107 10:20:28.218120 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97b87647-a485-4586-a9db-65036bdb9190-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "97b87647-a485-4586-a9db-65036bdb9190" (UID: "97b87647-a485-4586-a9db-65036bdb9190"). InnerVolumeSpecName "collectd-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 07 10:20:28 crc kubenswrapper[5131]: I0107 10:20:28.220475 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97b87647-a485-4586-a9db-65036bdb9190-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "97b87647-a485-4586-a9db-65036bdb9190" (UID: "97b87647-a485-4586-a9db-65036bdb9190"). InnerVolumeSpecName "healthcheck-log". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 07 10:20:28 crc kubenswrapper[5131]: I0107 10:20:28.224326 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97b87647-a485-4586-a9db-65036bdb9190-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "97b87647-a485-4586-a9db-65036bdb9190" (UID: "97b87647-a485-4586-a9db-65036bdb9190"). InnerVolumeSpecName "sensubility-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 07 10:20:28 crc kubenswrapper[5131]: I0107 10:20:28.238887 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97b87647-a485-4586-a9db-65036bdb9190-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "97b87647-a485-4586-a9db-65036bdb9190" (UID: "97b87647-a485-4586-a9db-65036bdb9190"). InnerVolumeSpecName "ceilometer-publisher". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 07 10:20:28 crc kubenswrapper[5131]: I0107 10:20:28.299660 5131 reconciler_common.go:299] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/97b87647-a485-4586-a9db-65036bdb9190-collectd-config\") on node \"crc\" DevicePath \"\""
Jan 07 10:20:28 crc kubenswrapper[5131]: I0107 10:20:28.299717 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-msfqm\" (UniqueName: \"kubernetes.io/projected/97b87647-a485-4586-a9db-65036bdb9190-kube-api-access-msfqm\") on node \"crc\" DevicePath \"\""
Jan 07 10:20:28 crc kubenswrapper[5131]: I0107 10:20:28.299732 5131 reconciler_common.go:299] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/97b87647-a485-4586-a9db-65036bdb9190-sensubility-config\") on node \"crc\" DevicePath \"\""
Jan 07 10:20:28 crc kubenswrapper[5131]: I0107 10:20:28.299743 5131 reconciler_common.go:299] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/97b87647-a485-4586-a9db-65036bdb9190-ceilometer-publisher\") on node \"crc\" DevicePath \"\""
Jan 07 10:20:28 crc kubenswrapper[5131]: I0107 10:20:28.299755 5131 reconciler_common.go:299] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/97b87647-a485-4586-a9db-65036bdb9190-healthcheck-log\") on node \"crc\" DevicePath \"\""
Jan 07 10:20:28 crc kubenswrapper[5131]: I0107 10:20:28.299766 5131 reconciler_common.go:299] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/97b87647-a485-4586-a9db-65036bdb9190-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\""
Jan 07 10:20:28 crc kubenswrapper[5131]: I0107 10:20:28.299777 5131 reconciler_common.go:299] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/97b87647-a485-4586-a9db-65036bdb9190-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\""
Jan 07 10:20:28 crc kubenswrapper[5131]: I0107 10:20:28.782729 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-vrt8f" event={"ID":"97b87647-a485-4586-a9db-65036bdb9190","Type":"ContainerDied","Data":"1486fa00e5547dbc389728e3f6393b6d7b25cf1183865163e5b47aba33caa111"}
Jan 07 10:20:28 crc kubenswrapper[5131]: I0107 10:20:28.783270 5131 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1486fa00e5547dbc389728e3f6393b6d7b25cf1183865163e5b47aba33caa111"
Jan 07 10:20:28 crc kubenswrapper[5131]: I0107 10:20:28.782753 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-vrt8f"
Jan 07 10:20:30 crc kubenswrapper[5131]: I0107 10:20:30.303082 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-vrt8f_97b87647-a485-4586-a9db-65036bdb9190/smoketest-collectd/0.log"
Jan 07 10:20:30 crc kubenswrapper[5131]: I0107 10:20:30.610636 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-vrt8f_97b87647-a485-4586-a9db-65036bdb9190/smoketest-ceilometer/0.log"
Jan 07 10:20:30 crc kubenswrapper[5131]: I0107 10:20:30.954372 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-interconnect-55bf8d5cb-86v7n_524128f4-0e28-42f6-8701-6c692b44e3f6/default-interconnect/0.log"
Jan 07 10:20:31 crc kubenswrapper[5131]: I0107 10:20:31.259149 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7f8f5c6486-zjmxl_c3630b93-73b5-4548-b124-2b200e4e5af1/bridge/2.log"
Jan 07 10:20:31 crc kubenswrapper[5131]: I0107 10:20:31.599415 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7f8f5c6486-zjmxl_c3630b93-73b5-4548-b124-2b200e4e5af1/sg-core/0.log"
Jan 07 10:20:31 crc kubenswrapper[5131]: I0107 10:20:31.944149 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-65cf5d5bcb-7fztr_2fd50338-e2ff-4265-a53d-62252487438d/bridge/2.log"
Jan 07 10:20:32 crc kubenswrapper[5131]: I0107 10:20:32.312295 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-65cf5d5bcb-7fztr_2fd50338-e2ff-4265-a53d-62252487438d/sg-core/0.log"
Jan 07 10:20:32 crc kubenswrapper[5131]: I0107 10:20:32.607497 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-ccnjq_4a3682af-b97f-48c1-8364-7708c5442e0c/bridge/2.log"
Jan 07 10:20:32 crc kubenswrapper[5131]: I0107 10:20:32.953159 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-ccnjq_4a3682af-b97f-48c1-8364-7708c5442e0c/sg-core/0.log"
Jan 07 10:20:33 crc kubenswrapper[5131]: I0107 10:20:33.340074 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-7d5f9469c5-jh7bv_98eab5a3-19fb-4e74-808d-26b8c315d76e/bridge/2.log"
Jan 07 10:20:33 crc kubenswrapper[5131]: I0107 10:20:33.676927 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-7d5f9469c5-jh7bv_98eab5a3-19fb-4e74-808d-26b8c315d76e/sg-core/0.log"
Jan 07 10:20:34 crc kubenswrapper[5131]: I0107 10:20:34.004451 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-58c78bbf69-tlrn9_a8abec8e-e2e1-4c4b-a2b9-3298e289f101/bridge/2.log"
Jan 07 10:20:34 crc kubenswrapper[5131]: I0107 10:20:34.334134 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-58c78bbf69-tlrn9_a8abec8e-e2e1-4c4b-a2b9-3298e289f101/sg-core/0.log"
Jan 07 10:20:36 crc kubenswrapper[5131]: I0107 10:20:36.277561 5131 scope.go:117] "RemoveContainer" containerID="4547a702ed8b5fcf176a6a074d80a7fd1cf1266a1e1bb50b5c8fa8a5d1a80210"
Jan 07 10:20:37 crc kubenswrapper[5131]: I0107 10:20:37.838507 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-55d55b9dd-hgtkz_f6080c03-03ff-4838-9364-c576264256a4/operator/0.log"
Jan 07 10:20:38 crc kubenswrapper[5131]: I0107 10:20:38.219761 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_0e6a5b7c-d782-4d92-ad16-7557ea99d644/prometheus/0.log"
Jan 07 10:20:38 crc kubenswrapper[5131]: I0107 10:20:38.564800 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_eebbd95e-bc5a-4c38-817e-06e8a132f328/elasticsearch/0.log"
Jan 07 10:20:38 crc kubenswrapper[5131]: I0107 10:20:38.885994 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-694dc457d5-spwzd_a2b52004-8840-457e-8c8b-a42110570a94/prometheus-webhook-snmp/0.log"
Jan 07 10:20:39 crc kubenswrapper[5131]: I0107 10:20:39.231972 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_1d78a5b3-b901-4eb9-bf3c-099adf94b65d/alertmanager/0.log"
Jan 07 10:20:40 crc kubenswrapper[5131]: I0107 10:20:40.184160 5131 scope.go:117] "RemoveContainer" containerID="9663cd7495facf8f3b5c9cd42ca06c0e50d8cba730f2743bbdac9e0b5db67e25"
Jan 07 10:20:40 crc kubenswrapper[5131]: E0107 10:20:40.184649 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvdrn_openshift-machine-config-operator(3942e752-44ba-4678-8723-6cd778e60d73)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73"
Jan 07 10:20:54 crc kubenswrapper[5131]: I0107 10:20:54.180395 5131 scope.go:117] "RemoveContainer" containerID="9663cd7495facf8f3b5c9cd42ca06c0e50d8cba730f2743bbdac9e0b5db67e25"
Jan 07 10:20:54 crc kubenswrapper[5131]: E0107 10:20:54.182260 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvdrn_openshift-machine-config-operator(3942e752-44ba-4678-8723-6cd778e60d73)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73"
Jan 07 10:20:58 crc kubenswrapper[5131]: I0107 10:20:58.063206 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-6fc67b8db8-clpch_8878a7e2-945b-4a2b-bb71-136be5087273/operator/0.log"
Jan 07 10:21:01 crc kubenswrapper[5131]: I0107 10:21:01.269444 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-55d55b9dd-hgtkz_f6080c03-03ff-4838-9364-c576264256a4/operator/0.log"
Jan 07 10:21:01 crc kubenswrapper[5131]: I0107 10:21:01.598442 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_qdr-test_68099886-8bb2-4d13-971f-878abe2cab6c/qdr/0.log"
Jan 07 10:21:05 crc kubenswrapper[5131]: I0107 10:21:05.180628 5131 scope.go:117] "RemoveContainer" containerID="9663cd7495facf8f3b5c9cd42ca06c0e50d8cba730f2743bbdac9e0b5db67e25"
Jan 07 10:21:05 crc kubenswrapper[5131]: E0107 10:21:05.181748 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvdrn_openshift-machine-config-operator(3942e752-44ba-4678-8723-6cd778e60d73)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73"
Jan 07 10:21:16 crc kubenswrapper[5131]: I0107 10:21:16.181356 5131 scope.go:117] "RemoveContainer" containerID="9663cd7495facf8f3b5c9cd42ca06c0e50d8cba730f2743bbdac9e0b5db67e25"
Jan 07 10:21:16 crc kubenswrapper[5131]: E0107 10:21:16.182670 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvdrn_openshift-machine-config-operator(3942e752-44ba-4678-8723-6cd778e60d73)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73"
Jan 07 10:21:30 crc kubenswrapper[5131]: I0107 10:21:30.180582 5131 scope.go:117] "RemoveContainer" containerID="9663cd7495facf8f3b5c9cd42ca06c0e50d8cba730f2743bbdac9e0b5db67e25"
Jan 07 10:21:30 crc kubenswrapper[5131]: E0107 10:21:30.181657 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvdrn_openshift-machine-config-operator(3942e752-44ba-4678-8723-6cd778e60d73)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73"
Jan 07 10:21:34 crc kubenswrapper[5131]: I0107 10:21:34.865560 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sw8dp"]
Jan 07 10:21:34 crc kubenswrapper[5131]: I0107 10:21:34.866982 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9ecbd976-19bc-466a-8c73-9fcde5c4f266" containerName="oc"
Jan 07 10:21:34 crc kubenswrapper[5131]: I0107 10:21:34.866998 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ecbd976-19bc-466a-8c73-9fcde5c4f266" containerName="oc"
Jan 07 10:21:34 crc kubenswrapper[5131]: I0107 10:21:34.867021 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="97b87647-a485-4586-a9db-65036bdb9190" containerName="smoketest-ceilometer"
Jan 07 10:21:34 crc kubenswrapper[5131]: I0107 10:21:34.867028 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="97b87647-a485-4586-a9db-65036bdb9190" containerName="smoketest-ceilometer"
Jan 07 10:21:34 crc kubenswrapper[5131]: I0107 10:21:34.867040 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="97b87647-a485-4586-a9db-65036bdb9190" containerName="smoketest-collectd"
Jan 07 10:21:34 crc kubenswrapper[5131]: I0107 10:21:34.867047 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="97b87647-a485-4586-a9db-65036bdb9190" containerName="smoketest-collectd"
Jan 07 10:21:34 crc kubenswrapper[5131]: I0107 10:21:34.867219 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="97b87647-a485-4586-a9db-65036bdb9190" containerName="smoketest-ceilometer"
Jan 07 10:21:34 crc kubenswrapper[5131]: I0107 10:21:34.867235 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="9ecbd976-19bc-466a-8c73-9fcde5c4f266" containerName="oc"
Jan 07 10:21:34 crc kubenswrapper[5131]: I0107 10:21:34.867250 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="97b87647-a485-4586-a9db-65036bdb9190" containerName="smoketest-collectd"
Jan 07 10:21:34 crc kubenswrapper[5131]: I0107 10:21:34.896769 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sw8dp"]
Jan 07 10:21:34 crc kubenswrapper[5131]: I0107 10:21:34.897011 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sw8dp"
Jan 07 10:21:35 crc kubenswrapper[5131]: I0107 10:21:35.028451 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28c5c3aa-7d70-462f-9052-deece3510a7c-utilities\") pod \"certified-operators-sw8dp\" (UID: \"28c5c3aa-7d70-462f-9052-deece3510a7c\") " pod="openshift-marketplace/certified-operators-sw8dp"
Jan 07 10:21:35 crc kubenswrapper[5131]: I0107 10:21:35.028564 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28c5c3aa-7d70-462f-9052-deece3510a7c-catalog-content\") pod \"certified-operators-sw8dp\" (UID: \"28c5c3aa-7d70-462f-9052-deece3510a7c\") " pod="openshift-marketplace/certified-operators-sw8dp"
Jan 07 10:21:35 crc kubenswrapper[5131]: I0107 10:21:35.028614 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qm8l\" (UniqueName: \"kubernetes.io/projected/28c5c3aa-7d70-462f-9052-deece3510a7c-kube-api-access-9qm8l\") pod \"certified-operators-sw8dp\" (UID: \"28c5c3aa-7d70-462f-9052-deece3510a7c\") " pod="openshift-marketplace/certified-operators-sw8dp"
Jan 07 10:21:35 crc kubenswrapper[5131]: I0107 10:21:35.130078 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28c5c3aa-7d70-462f-9052-deece3510a7c-utilities\") pod \"certified-operators-sw8dp\" (UID: \"28c5c3aa-7d70-462f-9052-deece3510a7c\") " pod="openshift-marketplace/certified-operators-sw8dp"
Jan 07 10:21:35 crc kubenswrapper[5131]: I0107 10:21:35.130168 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28c5c3aa-7d70-462f-9052-deece3510a7c-catalog-content\") pod \"certified-operators-sw8dp\" (UID: \"28c5c3aa-7d70-462f-9052-deece3510a7c\") " pod="openshift-marketplace/certified-operators-sw8dp"
Jan 07 10:21:35 crc kubenswrapper[5131]: I0107 10:21:35.130215 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9qm8l\" (UniqueName: \"kubernetes.io/projected/28c5c3aa-7d70-462f-9052-deece3510a7c-kube-api-access-9qm8l\") pod \"certified-operators-sw8dp\" (UID: \"28c5c3aa-7d70-462f-9052-deece3510a7c\") " pod="openshift-marketplace/certified-operators-sw8dp"
Jan 07 10:21:35 crc kubenswrapper[5131]: I0107 10:21:35.130674 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28c5c3aa-7d70-462f-9052-deece3510a7c-utilities\") pod \"certified-operators-sw8dp\" (UID: \"28c5c3aa-7d70-462f-9052-deece3510a7c\") " pod="openshift-marketplace/certified-operators-sw8dp"
Jan 07 10:21:35 crc kubenswrapper[5131]: I0107 10:21:35.130719 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28c5c3aa-7d70-462f-9052-deece3510a7c-catalog-content\") pod \"certified-operators-sw8dp\" (UID: \"28c5c3aa-7d70-462f-9052-deece3510a7c\") " pod="openshift-marketplace/certified-operators-sw8dp"
Jan 07 10:21:35 crc kubenswrapper[5131]: I0107 10:21:35.153507 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qm8l\" (UniqueName: \"kubernetes.io/projected/28c5c3aa-7d70-462f-9052-deece3510a7c-kube-api-access-9qm8l\") pod \"certified-operators-sw8dp\" (UID: \"28c5c3aa-7d70-462f-9052-deece3510a7c\") " pod="openshift-marketplace/certified-operators-sw8dp"
Jan 07 10:21:35 crc kubenswrapper[5131]: I0107 10:21:35.226027 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sw8dp"
Jan 07 10:21:35 crc kubenswrapper[5131]: I0107 10:21:35.419318 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sw8dp"]
Jan 07 10:21:35 crc kubenswrapper[5131]: I0107 10:21:35.471717 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sw8dp" event={"ID":"28c5c3aa-7d70-462f-9052-deece3510a7c","Type":"ContainerStarted","Data":"c84fdb9da34a65796c990b5acc7c460a306f3499d00c0bfe47e2b7c73dbbc8b2"}
Jan 07 10:21:36 crc kubenswrapper[5131]: I0107 10:21:36.399488 5131 scope.go:117] "RemoveContainer" containerID="d1a55cdf6d59607629d6c294e4387a398a77555a5528667926d35bc0d7bd6663"
Jan 07 10:21:36 crc kubenswrapper[5131]: I0107 10:21:36.486164 5131 generic.go:358] "Generic (PLEG): container finished" podID="28c5c3aa-7d70-462f-9052-deece3510a7c" containerID="5a5853b714e23c3ba1eabf763162e2418640a35d3f44292a3b5612bafe7f3914" exitCode=0
Jan 07 10:21:36 crc kubenswrapper[5131]: I0107 10:21:36.486241 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sw8dp" event={"ID":"28c5c3aa-7d70-462f-9052-deece3510a7c","Type":"ContainerDied","Data":"5a5853b714e23c3ba1eabf763162e2418640a35d3f44292a3b5612bafe7f3914"}
Jan 07 10:21:36 crc kubenswrapper[5131]: I0107 10:21:36.500999 5131 scope.go:117] "RemoveContainer" containerID="300568a7b9eefd7c441678a4c57985bf9235cc34d6e428bf264b992bbe3dce51"
Jan 07 10:21:36 crc kubenswrapper[5131]: I0107 10:21:36.601601 5131 scope.go:117] "RemoveContainer" containerID="91f8f198a2120c06518d8805f2a2f86ab7263ea81cfb45db2238957c5edcb927"
Jan 07 10:21:36 crc kubenswrapper[5131]: I0107 10:21:36.676794 5131 scope.go:117] "RemoveContainer" containerID="f947c8bf28d1cc3ce49bcf82348310d190cfe3032e31f9835b12da358ed040a2"
Jan 07 10:21:37 crc kubenswrapper[5131]: I0107 10:21:37.496022 5131 generic.go:358] "Generic (PLEG): container finished" podID="28c5c3aa-7d70-462f-9052-deece3510a7c" containerID="9fdf758354756bc4063dfc7816c16017b0ca3e0ae8df3a15388f33d4249d89ba" exitCode=0
Jan 07 10:21:37 crc kubenswrapper[5131]: I0107 10:21:37.496294 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sw8dp" event={"ID":"28c5c3aa-7d70-462f-9052-deece3510a7c","Type":"ContainerDied","Data":"9fdf758354756bc4063dfc7816c16017b0ca3e0ae8df3a15388f33d4249d89ba"}
Jan 07 10:21:37 crc kubenswrapper[5131]: I0107 10:21:37.661269 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-96cfq/must-gather-lfw95"]
Jan 07 10:21:37 crc kubenswrapper[5131]: I0107 10:21:37.674312 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-96cfq/must-gather-lfw95"
Jan 07 10:21:37 crc kubenswrapper[5131]: I0107 10:21:37.677233 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-96cfq\"/\"kube-root-ca.crt\""
Jan 07 10:21:37 crc kubenswrapper[5131]: I0107 10:21:37.678360 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-96cfq\"/\"default-dockercfg-c2spp\""
Jan 07 10:21:37 crc kubenswrapper[5131]: I0107 10:21:37.678667 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-96cfq\"/\"openshift-service-ca.crt\""
Jan 07 10:21:37 crc kubenswrapper[5131]: I0107 10:21:37.695475 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-96cfq/must-gather-lfw95"]
Jan 07 10:21:37 crc kubenswrapper[5131]: I0107 10:21:37.770959 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xzqj\" (UniqueName: \"kubernetes.io/projected/f714ec3d-a1c2-4c74-b0f9-f5832c6f0727-kube-api-access-4xzqj\") pod \"must-gather-lfw95\" (UID: \"f714ec3d-a1c2-4c74-b0f9-f5832c6f0727\") " pod="openshift-must-gather-96cfq/must-gather-lfw95"
Jan 07 10:21:37 crc kubenswrapper[5131]: I0107 10:21:37.771074 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f714ec3d-a1c2-4c74-b0f9-f5832c6f0727-must-gather-output\") pod \"must-gather-lfw95\" (UID: \"f714ec3d-a1c2-4c74-b0f9-f5832c6f0727\") " pod="openshift-must-gather-96cfq/must-gather-lfw95"
Jan 07 10:21:37 crc kubenswrapper[5131]: I0107 10:21:37.872991 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f714ec3d-a1c2-4c74-b0f9-f5832c6f0727-must-gather-output\") pod \"must-gather-lfw95\" (UID: \"f714ec3d-a1c2-4c74-b0f9-f5832c6f0727\") " pod="openshift-must-gather-96cfq/must-gather-lfw95"
Jan 07 10:21:37 crc kubenswrapper[5131]: I0107 10:21:37.873319 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4xzqj\" (UniqueName: \"kubernetes.io/projected/f714ec3d-a1c2-4c74-b0f9-f5832c6f0727-kube-api-access-4xzqj\") pod \"must-gather-lfw95\" (UID: \"f714ec3d-a1c2-4c74-b0f9-f5832c6f0727\") " pod="openshift-must-gather-96cfq/must-gather-lfw95"
Jan 07 10:21:37 crc kubenswrapper[5131]: I0107 10:21:37.874163 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f714ec3d-a1c2-4c74-b0f9-f5832c6f0727-must-gather-output\") pod \"must-gather-lfw95\" (UID: \"f714ec3d-a1c2-4c74-b0f9-f5832c6f0727\") " pod="openshift-must-gather-96cfq/must-gather-lfw95"
Jan 07 10:21:37 crc kubenswrapper[5131]: I0107 10:21:37.908131 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xzqj\" (UniqueName: \"kubernetes.io/projected/f714ec3d-a1c2-4c74-b0f9-f5832c6f0727-kube-api-access-4xzqj\") pod \"must-gather-lfw95\" (UID: \"f714ec3d-a1c2-4c74-b0f9-f5832c6f0727\") " pod="openshift-must-gather-96cfq/must-gather-lfw95"
Jan 07 10:21:37 crc kubenswrapper[5131]: I0107 10:21:37.994324 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-96cfq/must-gather-lfw95"
Jan 07 10:21:38 crc kubenswrapper[5131]: I0107 10:21:38.221255 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-96cfq/must-gather-lfw95"]
Jan 07 10:21:38 crc kubenswrapper[5131]: I0107 10:21:38.505567 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-96cfq/must-gather-lfw95" event={"ID":"f714ec3d-a1c2-4c74-b0f9-f5832c6f0727","Type":"ContainerStarted","Data":"28d212747d48bb433ac7ebde6bc765c7f7346c10350b78618b4bcc6cd7208bec"}
Jan 07 10:21:38 crc kubenswrapper[5131]: I0107 10:21:38.509417 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sw8dp" event={"ID":"28c5c3aa-7d70-462f-9052-deece3510a7c","Type":"ContainerStarted","Data":"52540e7bee1b6ef4188a22daa6acc5f2d0c515d184f244fb932ad8c2872920f2"}
Jan 07 10:21:38 crc kubenswrapper[5131]: I0107 10:21:38.528544 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sw8dp" podStartSLOduration=3.867954067 podStartE2EDuration="4.528527119s" podCreationTimestamp="2026-01-07 10:21:34 +0000 UTC" firstStartedPulling="2026-01-07 10:21:36.487425163 +0000 UTC m=+1924.653726737" lastFinishedPulling="2026-01-07 10:21:37.147998185 +0000 UTC m=+1925.314299789" observedRunningTime="2026-01-07 10:21:38.525164675 +0000 UTC m=+1926.691466249" watchObservedRunningTime="2026-01-07 10:21:38.528527119 +0000 UTC m=+1926.694828683"
Jan 07 10:21:43 crc kubenswrapper[5131]: I0107 10:21:43.180533 5131 scope.go:117] "RemoveContainer" containerID="9663cd7495facf8f3b5c9cd42ca06c0e50d8cba730f2743bbdac9e0b5db67e25"
Jan 07 10:21:43 crc kubenswrapper[5131]: E0107 10:21:43.181258 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvdrn_openshift-machine-config-operator(3942e752-44ba-4678-8723-6cd778e60d73)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73"
Jan 07 10:21:45 crc kubenswrapper[5131]: I0107 10:21:45.226927 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-sw8dp"
Jan 07 10:21:45 crc kubenswrapper[5131]: I0107 10:21:45.226984 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sw8dp"
Jan 07 10:21:45 crc kubenswrapper[5131]: I0107 10:21:45.265607 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sw8dp"
Jan 07 10:21:45 crc kubenswrapper[5131]: I0107 10:21:45.621138 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sw8dp"
Jan 07 10:21:45 crc kubenswrapper[5131]: I0107 10:21:45.668032 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sw8dp"]
Jan 07 10:21:46 crc kubenswrapper[5131]: I0107 10:21:46.573096 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-96cfq/must-gather-lfw95" event={"ID":"f714ec3d-a1c2-4c74-b0f9-f5832c6f0727","Type":"ContainerStarted","Data":"821daaee4f27cc966d5a7808bd48b7ce2d191fd9005dd5ea0fc5f0ac43468837"}
Jan 07 10:21:46 crc kubenswrapper[5131]: I0107 10:21:46.573489 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-96cfq/must-gather-lfw95" event={"ID":"f714ec3d-a1c2-4c74-b0f9-f5832c6f0727","Type":"ContainerStarted","Data":"bba4e5523d763abf6f3363469fb7c07ca75e6469530effe84eec61da0f1e4720"}
Jan 07 10:21:46 crc kubenswrapper[5131]: I0107 10:21:46.590805 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-96cfq/must-gather-lfw95" podStartSLOduration=1.976199484 podStartE2EDuration="9.590786705s" podCreationTimestamp="2026-01-07 10:21:37 +0000 UTC" firstStartedPulling="2026-01-07 10:21:38.240635181 +0000 UTC m=+1926.406936735" lastFinishedPulling="2026-01-07 10:21:45.855222392 +0000 UTC m=+1934.021523956" observedRunningTime="2026-01-07 10:21:46.588294423 +0000 UTC m=+1934.754595987" watchObservedRunningTime="2026-01-07 10:21:46.590786705 +0000 UTC m=+1934.757088269"
Jan 07 10:21:47 crc kubenswrapper[5131]: I0107 10:21:47.584719 5131 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sw8dp" podUID="28c5c3aa-7d70-462f-9052-deece3510a7c" containerName="registry-server" containerID="cri-o://52540e7bee1b6ef4188a22daa6acc5f2d0c515d184f244fb932ad8c2872920f2" gracePeriod=2
Jan 07 10:21:48 crc kubenswrapper[5131]: I0107 10:21:48.056759 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sw8dp"
Jan 07 10:21:48 crc kubenswrapper[5131]: I0107 10:21:48.085235 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qm8l\" (UniqueName: \"kubernetes.io/projected/28c5c3aa-7d70-462f-9052-deece3510a7c-kube-api-access-9qm8l\") pod \"28c5c3aa-7d70-462f-9052-deece3510a7c\" (UID: \"28c5c3aa-7d70-462f-9052-deece3510a7c\") "
Jan 07 10:21:48 crc kubenswrapper[5131]: I0107 10:21:48.086471 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28c5c3aa-7d70-462f-9052-deece3510a7c-utilities\") pod \"28c5c3aa-7d70-462f-9052-deece3510a7c\" (UID: \"28c5c3aa-7d70-462f-9052-deece3510a7c\") "
Jan 07 10:21:48 crc kubenswrapper[5131]: I0107 10:21:48.086585 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28c5c3aa-7d70-462f-9052-deece3510a7c-catalog-content\") pod \"28c5c3aa-7d70-462f-9052-deece3510a7c\" (UID: \"28c5c3aa-7d70-462f-9052-deece3510a7c\") "
Jan 07 10:21:48 crc kubenswrapper[5131]: I0107 10:21:48.087741 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28c5c3aa-7d70-462f-9052-deece3510a7c-utilities" (OuterVolumeSpecName: "utilities") pod "28c5c3aa-7d70-462f-9052-deece3510a7c" (UID: "28c5c3aa-7d70-462f-9052-deece3510a7c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 07 10:21:48 crc kubenswrapper[5131]: I0107 10:21:48.092735 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28c5c3aa-7d70-462f-9052-deece3510a7c-kube-api-access-9qm8l" (OuterVolumeSpecName: "kube-api-access-9qm8l") pod "28c5c3aa-7d70-462f-9052-deece3510a7c" (UID: "28c5c3aa-7d70-462f-9052-deece3510a7c"). InnerVolumeSpecName "kube-api-access-9qm8l". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 07 10:21:48 crc kubenswrapper[5131]: I0107 10:21:48.129456 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28c5c3aa-7d70-462f-9052-deece3510a7c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "28c5c3aa-7d70-462f-9052-deece3510a7c" (UID: "28c5c3aa-7d70-462f-9052-deece3510a7c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 07 10:21:48 crc kubenswrapper[5131]: I0107 10:21:48.187944 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9qm8l\" (UniqueName: \"kubernetes.io/projected/28c5c3aa-7d70-462f-9052-deece3510a7c-kube-api-access-9qm8l\") on node \"crc\" DevicePath \"\""
Jan 07 10:21:48 crc kubenswrapper[5131]: I0107 10:21:48.187971 5131 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28c5c3aa-7d70-462f-9052-deece3510a7c-utilities\") on node \"crc\" DevicePath \"\""
Jan 07 10:21:48 crc kubenswrapper[5131]: I0107 10:21:48.187983 5131 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28c5c3aa-7d70-462f-9052-deece3510a7c-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 07 10:21:48 crc kubenswrapper[5131]: I0107 10:21:48.595709 5131 generic.go:358] "Generic (PLEG): container finished" podID="28c5c3aa-7d70-462f-9052-deece3510a7c" containerID="52540e7bee1b6ef4188a22daa6acc5f2d0c515d184f244fb932ad8c2872920f2" exitCode=0
Jan 07 10:21:48 crc kubenswrapper[5131]: I0107 10:21:48.595809 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sw8dp"
Jan 07 10:21:48 crc kubenswrapper[5131]: I0107 10:21:48.595895 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sw8dp" event={"ID":"28c5c3aa-7d70-462f-9052-deece3510a7c","Type":"ContainerDied","Data":"52540e7bee1b6ef4188a22daa6acc5f2d0c515d184f244fb932ad8c2872920f2"}
Jan 07 10:21:48 crc kubenswrapper[5131]: I0107 10:21:48.595978 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sw8dp" event={"ID":"28c5c3aa-7d70-462f-9052-deece3510a7c","Type":"ContainerDied","Data":"c84fdb9da34a65796c990b5acc7c460a306f3499d00c0bfe47e2b7c73dbbc8b2"}
Jan 07 10:21:48 crc kubenswrapper[5131]: I0107 10:21:48.596009 5131 scope.go:117] "RemoveContainer" containerID="52540e7bee1b6ef4188a22daa6acc5f2d0c515d184f244fb932ad8c2872920f2"
Jan 07 10:21:48 crc kubenswrapper[5131]: I0107 10:21:48.633331 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sw8dp"]
Jan 07 10:21:48 crc kubenswrapper[5131]: I0107 10:21:48.634143 5131 scope.go:117] "RemoveContainer" containerID="9fdf758354756bc4063dfc7816c16017b0ca3e0ae8df3a15388f33d4249d89ba"
Jan 07 10:21:48 crc kubenswrapper[5131]: I0107 10:21:48.657286 5131 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sw8dp"]
Jan 07 10:21:48 crc kubenswrapper[5131]: I0107 10:21:48.673368 5131 scope.go:117] "RemoveContainer" containerID="5a5853b714e23c3ba1eabf763162e2418640a35d3f44292a3b5612bafe7f3914"
Jan 07 10:21:48 crc kubenswrapper[5131]: I0107 10:21:48.698305 5131 scope.go:117] "RemoveContainer" containerID="52540e7bee1b6ef4188a22daa6acc5f2d0c515d184f244fb932ad8c2872920f2"
Jan 07 10:21:48 crc kubenswrapper[5131]: E0107 10:21:48.713182 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52540e7bee1b6ef4188a22daa6acc5f2d0c515d184f244fb932ad8c2872920f2\": container with ID starting with 52540e7bee1b6ef4188a22daa6acc5f2d0c515d184f244fb932ad8c2872920f2 not found: ID does not exist" containerID="52540e7bee1b6ef4188a22daa6acc5f2d0c515d184f244fb932ad8c2872920f2"
Jan 07 10:21:48 crc kubenswrapper[5131]: I0107 10:21:48.713239 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52540e7bee1b6ef4188a22daa6acc5f2d0c515d184f244fb932ad8c2872920f2"} err="failed to get container status \"52540e7bee1b6ef4188a22daa6acc5f2d0c515d184f244fb932ad8c2872920f2\": rpc error: code = NotFound desc = could not find container \"52540e7bee1b6ef4188a22daa6acc5f2d0c515d184f244fb932ad8c2872920f2\": container with ID starting with 52540e7bee1b6ef4188a22daa6acc5f2d0c515d184f244fb932ad8c2872920f2 not found: ID does not exist"
Jan 07 10:21:48 crc kubenswrapper[5131]: I0107 10:21:48.713268 5131 scope.go:117] "RemoveContainer" containerID="9fdf758354756bc4063dfc7816c16017b0ca3e0ae8df3a15388f33d4249d89ba"
Jan 07 10:21:48 crc kubenswrapper[5131]: E0107 10:21:48.714109 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fdf758354756bc4063dfc7816c16017b0ca3e0ae8df3a15388f33d4249d89ba\": container with ID starting with 9fdf758354756bc4063dfc7816c16017b0ca3e0ae8df3a15388f33d4249d89ba not found: ID does not exist" containerID="9fdf758354756bc4063dfc7816c16017b0ca3e0ae8df3a15388f33d4249d89ba"
Jan 07 10:21:48 crc kubenswrapper[5131]: I0107 10:21:48.714160 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fdf758354756bc4063dfc7816c16017b0ca3e0ae8df3a15388f33d4249d89ba"} err="failed to get container status \"9fdf758354756bc4063dfc7816c16017b0ca3e0ae8df3a15388f33d4249d89ba\": rpc error: code = NotFound desc = could not find container \"9fdf758354756bc4063dfc7816c16017b0ca3e0ae8df3a15388f33d4249d89ba\": container with ID
starting with 9fdf758354756bc4063dfc7816c16017b0ca3e0ae8df3a15388f33d4249d89ba not found: ID does not exist" Jan 07 10:21:48 crc kubenswrapper[5131]: I0107 10:21:48.714192 5131 scope.go:117] "RemoveContainer" containerID="5a5853b714e23c3ba1eabf763162e2418640a35d3f44292a3b5612bafe7f3914" Jan 07 10:21:48 crc kubenswrapper[5131]: E0107 10:21:48.714567 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a5853b714e23c3ba1eabf763162e2418640a35d3f44292a3b5612bafe7f3914\": container with ID starting with 5a5853b714e23c3ba1eabf763162e2418640a35d3f44292a3b5612bafe7f3914 not found: ID does not exist" containerID="5a5853b714e23c3ba1eabf763162e2418640a35d3f44292a3b5612bafe7f3914" Jan 07 10:21:48 crc kubenswrapper[5131]: I0107 10:21:48.714613 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a5853b714e23c3ba1eabf763162e2418640a35d3f44292a3b5612bafe7f3914"} err="failed to get container status \"5a5853b714e23c3ba1eabf763162e2418640a35d3f44292a3b5612bafe7f3914\": rpc error: code = NotFound desc = could not find container \"5a5853b714e23c3ba1eabf763162e2418640a35d3f44292a3b5612bafe7f3914\": container with ID starting with 5a5853b714e23c3ba1eabf763162e2418640a35d3f44292a3b5612bafe7f3914 not found: ID does not exist" Jan 07 10:21:50 crc kubenswrapper[5131]: I0107 10:21:50.186656 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28c5c3aa-7d70-462f-9052-deece3510a7c" path="/var/lib/kubelet/pods/28c5c3aa-7d70-462f-9052-deece3510a7c/volumes" Jan 07 10:21:53 crc kubenswrapper[5131]: I0107 10:21:53.389773 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-l9hnk"] Jan 07 10:21:53 crc kubenswrapper[5131]: I0107 10:21:53.390992 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="28c5c3aa-7d70-462f-9052-deece3510a7c" containerName="extract-content" Jan 07 10:21:53 
crc kubenswrapper[5131]: I0107 10:21:53.391009 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="28c5c3aa-7d70-462f-9052-deece3510a7c" containerName="extract-content" Jan 07 10:21:53 crc kubenswrapper[5131]: I0107 10:21:53.391024 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="28c5c3aa-7d70-462f-9052-deece3510a7c" containerName="registry-server" Jan 07 10:21:53 crc kubenswrapper[5131]: I0107 10:21:53.391031 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="28c5c3aa-7d70-462f-9052-deece3510a7c" containerName="registry-server" Jan 07 10:21:53 crc kubenswrapper[5131]: I0107 10:21:53.391046 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="28c5c3aa-7d70-462f-9052-deece3510a7c" containerName="extract-utilities" Jan 07 10:21:53 crc kubenswrapper[5131]: I0107 10:21:53.391053 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="28c5c3aa-7d70-462f-9052-deece3510a7c" containerName="extract-utilities" Jan 07 10:21:53 crc kubenswrapper[5131]: I0107 10:21:53.391231 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="28c5c3aa-7d70-462f-9052-deece3510a7c" containerName="registry-server" Jan 07 10:21:54 crc kubenswrapper[5131]: I0107 10:21:54.813538 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-l9hnk"] Jan 07 10:21:54 crc kubenswrapper[5131]: I0107 10:21:54.813728 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-l9hnk" Jan 07 10:21:54 crc kubenswrapper[5131]: I0107 10:21:54.887470 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crblb\" (UniqueName: \"kubernetes.io/projected/0c191330-b800-4c2b-a8a4-5e05518ca7a1-kube-api-access-crblb\") pod \"infrawatch-operators-l9hnk\" (UID: \"0c191330-b800-4c2b-a8a4-5e05518ca7a1\") " pod="service-telemetry/infrawatch-operators-l9hnk" Jan 07 10:21:54 crc kubenswrapper[5131]: I0107 10:21:54.989118 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-crblb\" (UniqueName: \"kubernetes.io/projected/0c191330-b800-4c2b-a8a4-5e05518ca7a1-kube-api-access-crblb\") pod \"infrawatch-operators-l9hnk\" (UID: \"0c191330-b800-4c2b-a8a4-5e05518ca7a1\") " pod="service-telemetry/infrawatch-operators-l9hnk" Jan 07 10:21:55 crc kubenswrapper[5131]: I0107 10:21:55.008013 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-crblb\" (UniqueName: \"kubernetes.io/projected/0c191330-b800-4c2b-a8a4-5e05518ca7a1-kube-api-access-crblb\") pod \"infrawatch-operators-l9hnk\" (UID: \"0c191330-b800-4c2b-a8a4-5e05518ca7a1\") " pod="service-telemetry/infrawatch-operators-l9hnk" Jan 07 10:21:55 crc kubenswrapper[5131]: I0107 10:21:55.148894 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-l9hnk" Jan 07 10:21:55 crc kubenswrapper[5131]: I0107 10:21:55.181341 5131 scope.go:117] "RemoveContainer" containerID="9663cd7495facf8f3b5c9cd42ca06c0e50d8cba730f2743bbdac9e0b5db67e25" Jan 07 10:21:55 crc kubenswrapper[5131]: I0107 10:21:55.182642 5131 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 07 10:21:55 crc kubenswrapper[5131]: I0107 10:21:55.658235 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-l9hnk"] Jan 07 10:21:55 crc kubenswrapper[5131]: I0107 10:21:55.659578 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" event={"ID":"3942e752-44ba-4678-8723-6cd778e60d73","Type":"ContainerStarted","Data":"87a2d5c5610982b3fc470ab45de6211d73250fad6e33893521dc2e60ad277d95"} Jan 07 10:21:56 crc kubenswrapper[5131]: I0107 10:21:56.669465 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-l9hnk" event={"ID":"0c191330-b800-4c2b-a8a4-5e05518ca7a1","Type":"ContainerStarted","Data":"0dfd0d36c2253bf975663eaf4d04ae93d6623d1ca0e79605a44cdc25c3b2e5b3"} Jan 07 10:21:56 crc kubenswrapper[5131]: I0107 10:21:56.670168 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-l9hnk" event={"ID":"0c191330-b800-4c2b-a8a4-5e05518ca7a1","Type":"ContainerStarted","Data":"bdf412ca06ba658f2eb56f2229c1b4c23ecdf8ce2f277dab4f7c7d4af360e78c"} Jan 07 10:21:56 crc kubenswrapper[5131]: I0107 10:21:56.693891 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-l9hnk" podStartSLOduration=3.572857718 podStartE2EDuration="3.693867882s" podCreationTimestamp="2026-01-07 10:21:53 +0000 UTC" firstStartedPulling="2026-01-07 10:21:55.661346923 +0000 UTC m=+1943.827648487" lastFinishedPulling="2026-01-07 
10:21:55.782357077 +0000 UTC m=+1943.948658651" observedRunningTime="2026-01-07 10:21:56.684723102 +0000 UTC m=+1944.851024686" watchObservedRunningTime="2026-01-07 10:21:56.693867882 +0000 UTC m=+1944.860169466" Jan 07 10:22:00 crc kubenswrapper[5131]: I0107 10:22:00.143406 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29463022-jpl7l"] Jan 07 10:22:00 crc kubenswrapper[5131]: I0107 10:22:00.151270 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29463022-jpl7l" Jan 07 10:22:00 crc kubenswrapper[5131]: I0107 10:22:00.154169 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29463022-jpl7l"] Jan 07 10:22:00 crc kubenswrapper[5131]: I0107 10:22:00.157146 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-l8fwl\"" Jan 07 10:22:00 crc kubenswrapper[5131]: I0107 10:22:00.157147 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 07 10:22:00 crc kubenswrapper[5131]: I0107 10:22:00.157251 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 07 10:22:00 crc kubenswrapper[5131]: I0107 10:22:00.271364 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9f8b\" (UniqueName: \"kubernetes.io/projected/462d6a06-dce9-4c32-ae21-105edfc12ff3-kube-api-access-w9f8b\") pod \"auto-csr-approver-29463022-jpl7l\" (UID: \"462d6a06-dce9-4c32-ae21-105edfc12ff3\") " pod="openshift-infra/auto-csr-approver-29463022-jpl7l" Jan 07 10:22:00 crc kubenswrapper[5131]: I0107 10:22:00.372941 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w9f8b\" (UniqueName: 
\"kubernetes.io/projected/462d6a06-dce9-4c32-ae21-105edfc12ff3-kube-api-access-w9f8b\") pod \"auto-csr-approver-29463022-jpl7l\" (UID: \"462d6a06-dce9-4c32-ae21-105edfc12ff3\") " pod="openshift-infra/auto-csr-approver-29463022-jpl7l" Jan 07 10:22:00 crc kubenswrapper[5131]: I0107 10:22:00.397642 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9f8b\" (UniqueName: \"kubernetes.io/projected/462d6a06-dce9-4c32-ae21-105edfc12ff3-kube-api-access-w9f8b\") pod \"auto-csr-approver-29463022-jpl7l\" (UID: \"462d6a06-dce9-4c32-ae21-105edfc12ff3\") " pod="openshift-infra/auto-csr-approver-29463022-jpl7l" Jan 07 10:22:00 crc kubenswrapper[5131]: I0107 10:22:00.486651 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29463022-jpl7l" Jan 07 10:22:00 crc kubenswrapper[5131]: I0107 10:22:00.768628 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29463022-jpl7l"] Jan 07 10:22:00 crc kubenswrapper[5131]: W0107 10:22:00.774226 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod462d6a06_dce9_4c32_ae21_105edfc12ff3.slice/crio-02a6b46e8e96391eed5bb18bc788cbc9e2b2979fe39228cb9fd7c97ac1a5b2a8 WatchSource:0}: Error finding container 02a6b46e8e96391eed5bb18bc788cbc9e2b2979fe39228cb9fd7c97ac1a5b2a8: Status 404 returned error can't find the container with id 02a6b46e8e96391eed5bb18bc788cbc9e2b2979fe39228cb9fd7c97ac1a5b2a8 Jan 07 10:22:01 crc kubenswrapper[5131]: I0107 10:22:01.716665 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29463022-jpl7l" event={"ID":"462d6a06-dce9-4c32-ae21-105edfc12ff3","Type":"ContainerStarted","Data":"02a6b46e8e96391eed5bb18bc788cbc9e2b2979fe39228cb9fd7c97ac1a5b2a8"} Jan 07 10:22:02 crc kubenswrapper[5131]: I0107 10:22:02.744193 5131 generic.go:358] "Generic (PLEG): container finished" 
podID="462d6a06-dce9-4c32-ae21-105edfc12ff3" containerID="7a41426b622d1a614889309a73d2b71a56f4f67630c609977aeb5e4ca882e347" exitCode=0 Jan 07 10:22:02 crc kubenswrapper[5131]: I0107 10:22:02.744541 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29463022-jpl7l" event={"ID":"462d6a06-dce9-4c32-ae21-105edfc12ff3","Type":"ContainerDied","Data":"7a41426b622d1a614889309a73d2b71a56f4f67630c609977aeb5e4ca882e347"} Jan 07 10:22:04 crc kubenswrapper[5131]: I0107 10:22:04.060048 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29463022-jpl7l" Jan 07 10:22:04 crc kubenswrapper[5131]: I0107 10:22:04.140425 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9f8b\" (UniqueName: \"kubernetes.io/projected/462d6a06-dce9-4c32-ae21-105edfc12ff3-kube-api-access-w9f8b\") pod \"462d6a06-dce9-4c32-ae21-105edfc12ff3\" (UID: \"462d6a06-dce9-4c32-ae21-105edfc12ff3\") " Jan 07 10:22:04 crc kubenswrapper[5131]: I0107 10:22:04.149014 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/462d6a06-dce9-4c32-ae21-105edfc12ff3-kube-api-access-w9f8b" (OuterVolumeSpecName: "kube-api-access-w9f8b") pod "462d6a06-dce9-4c32-ae21-105edfc12ff3" (UID: "462d6a06-dce9-4c32-ae21-105edfc12ff3"). InnerVolumeSpecName "kube-api-access-w9f8b". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 10:22:04 crc kubenswrapper[5131]: I0107 10:22:04.242296 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w9f8b\" (UniqueName: \"kubernetes.io/projected/462d6a06-dce9-4c32-ae21-105edfc12ff3-kube-api-access-w9f8b\") on node \"crc\" DevicePath \"\"" Jan 07 10:22:04 crc kubenswrapper[5131]: I0107 10:22:04.767338 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29463022-jpl7l" event={"ID":"462d6a06-dce9-4c32-ae21-105edfc12ff3","Type":"ContainerDied","Data":"02a6b46e8e96391eed5bb18bc788cbc9e2b2979fe39228cb9fd7c97ac1a5b2a8"} Jan 07 10:22:04 crc kubenswrapper[5131]: I0107 10:22:04.767378 5131 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02a6b46e8e96391eed5bb18bc788cbc9e2b2979fe39228cb9fd7c97ac1a5b2a8" Jan 07 10:22:04 crc kubenswrapper[5131]: I0107 10:22:04.767439 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29463022-jpl7l" Jan 07 10:22:05 crc kubenswrapper[5131]: I0107 10:22:05.122886 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29463016-vw4p6"] Jan 07 10:22:05 crc kubenswrapper[5131]: I0107 10:22:05.138767 5131 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29463016-vw4p6"] Jan 07 10:22:05 crc kubenswrapper[5131]: I0107 10:22:05.149585 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-l9hnk" Jan 07 10:22:05 crc kubenswrapper[5131]: I0107 10:22:05.150055 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/infrawatch-operators-l9hnk" Jan 07 10:22:05 crc kubenswrapper[5131]: I0107 10:22:05.189748 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-l9hnk" Jan 
07 10:22:05 crc kubenswrapper[5131]: I0107 10:22:05.814720 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-l9hnk" Jan 07 10:22:05 crc kubenswrapper[5131]: I0107 10:22:05.866804 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-l9hnk"] Jan 07 10:22:06 crc kubenswrapper[5131]: I0107 10:22:06.200309 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a45faf0-6e45-472b-a9eb-118cdf319d61" path="/var/lib/kubelet/pods/9a45faf0-6e45-472b-a9eb-118cdf319d61/volumes" Jan 07 10:22:07 crc kubenswrapper[5131]: I0107 10:22:07.798155 5131 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-l9hnk" podUID="0c191330-b800-4c2b-a8a4-5e05518ca7a1" containerName="registry-server" containerID="cri-o://0dfd0d36c2253bf975663eaf4d04ae93d6623d1ca0e79605a44cdc25c3b2e5b3" gracePeriod=2 Jan 07 10:22:08 crc kubenswrapper[5131]: I0107 10:22:08.191536 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-l9hnk" Jan 07 10:22:08 crc kubenswrapper[5131]: I0107 10:22:08.328390 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crblb\" (UniqueName: \"kubernetes.io/projected/0c191330-b800-4c2b-a8a4-5e05518ca7a1-kube-api-access-crblb\") pod \"0c191330-b800-4c2b-a8a4-5e05518ca7a1\" (UID: \"0c191330-b800-4c2b-a8a4-5e05518ca7a1\") " Jan 07 10:22:08 crc kubenswrapper[5131]: I0107 10:22:08.343166 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c191330-b800-4c2b-a8a4-5e05518ca7a1-kube-api-access-crblb" (OuterVolumeSpecName: "kube-api-access-crblb") pod "0c191330-b800-4c2b-a8a4-5e05518ca7a1" (UID: "0c191330-b800-4c2b-a8a4-5e05518ca7a1"). InnerVolumeSpecName "kube-api-access-crblb". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 10:22:08 crc kubenswrapper[5131]: I0107 10:22:08.429941 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-crblb\" (UniqueName: \"kubernetes.io/projected/0c191330-b800-4c2b-a8a4-5e05518ca7a1-kube-api-access-crblb\") on node \"crc\" DevicePath \"\"" Jan 07 10:22:08 crc kubenswrapper[5131]: I0107 10:22:08.806271 5131 generic.go:358] "Generic (PLEG): container finished" podID="0c191330-b800-4c2b-a8a4-5e05518ca7a1" containerID="0dfd0d36c2253bf975663eaf4d04ae93d6623d1ca0e79605a44cdc25c3b2e5b3" exitCode=0 Jan 07 10:22:08 crc kubenswrapper[5131]: I0107 10:22:08.806617 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-l9hnk" Jan 07 10:22:08 crc kubenswrapper[5131]: I0107 10:22:08.806490 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-l9hnk" event={"ID":"0c191330-b800-4c2b-a8a4-5e05518ca7a1","Type":"ContainerDied","Data":"0dfd0d36c2253bf975663eaf4d04ae93d6623d1ca0e79605a44cdc25c3b2e5b3"} Jan 07 10:22:08 crc kubenswrapper[5131]: I0107 10:22:08.806753 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-l9hnk" event={"ID":"0c191330-b800-4c2b-a8a4-5e05518ca7a1","Type":"ContainerDied","Data":"bdf412ca06ba658f2eb56f2229c1b4c23ecdf8ce2f277dab4f7c7d4af360e78c"} Jan 07 10:22:08 crc kubenswrapper[5131]: I0107 10:22:08.806776 5131 scope.go:117] "RemoveContainer" containerID="0dfd0d36c2253bf975663eaf4d04ae93d6623d1ca0e79605a44cdc25c3b2e5b3" Jan 07 10:22:08 crc kubenswrapper[5131]: I0107 10:22:08.842385 5131 scope.go:117] "RemoveContainer" containerID="0dfd0d36c2253bf975663eaf4d04ae93d6623d1ca0e79605a44cdc25c3b2e5b3" Jan 07 10:22:08 crc kubenswrapper[5131]: I0107 10:22:08.843154 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-l9hnk"] Jan 07 10:22:08 crc kubenswrapper[5131]: 
E0107 10:22:08.843426 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0dfd0d36c2253bf975663eaf4d04ae93d6623d1ca0e79605a44cdc25c3b2e5b3\": container with ID starting with 0dfd0d36c2253bf975663eaf4d04ae93d6623d1ca0e79605a44cdc25c3b2e5b3 not found: ID does not exist" containerID="0dfd0d36c2253bf975663eaf4d04ae93d6623d1ca0e79605a44cdc25c3b2e5b3" Jan 07 10:22:08 crc kubenswrapper[5131]: I0107 10:22:08.843496 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dfd0d36c2253bf975663eaf4d04ae93d6623d1ca0e79605a44cdc25c3b2e5b3"} err="failed to get container status \"0dfd0d36c2253bf975663eaf4d04ae93d6623d1ca0e79605a44cdc25c3b2e5b3\": rpc error: code = NotFound desc = could not find container \"0dfd0d36c2253bf975663eaf4d04ae93d6623d1ca0e79605a44cdc25c3b2e5b3\": container with ID starting with 0dfd0d36c2253bf975663eaf4d04ae93d6623d1ca0e79605a44cdc25c3b2e5b3 not found: ID does not exist" Jan 07 10:22:08 crc kubenswrapper[5131]: I0107 10:22:08.849812 5131 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-l9hnk"] Jan 07 10:22:10 crc kubenswrapper[5131]: I0107 10:22:10.188160 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c191330-b800-4c2b-a8a4-5e05518ca7a1" path="/var/lib/kubelet/pods/0c191330-b800-4c2b-a8a4-5e05518ca7a1/volumes" Jan 07 10:22:28 crc kubenswrapper[5131]: I0107 10:22:28.891131 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-z5z56"] Jan 07 10:22:28 crc kubenswrapper[5131]: I0107 10:22:28.892517 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="462d6a06-dce9-4c32-ae21-105edfc12ff3" containerName="oc" Jan 07 10:22:28 crc kubenswrapper[5131]: I0107 10:22:28.892535 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="462d6a06-dce9-4c32-ae21-105edfc12ff3" containerName="oc" Jan 07 
10:22:28 crc kubenswrapper[5131]: I0107 10:22:28.892569 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0c191330-b800-4c2b-a8a4-5e05518ca7a1" containerName="registry-server" Jan 07 10:22:28 crc kubenswrapper[5131]: I0107 10:22:28.892577 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c191330-b800-4c2b-a8a4-5e05518ca7a1" containerName="registry-server" Jan 07 10:22:28 crc kubenswrapper[5131]: I0107 10:22:28.892726 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="0c191330-b800-4c2b-a8a4-5e05518ca7a1" containerName="registry-server" Jan 07 10:22:28 crc kubenswrapper[5131]: I0107 10:22:28.892747 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="462d6a06-dce9-4c32-ae21-105edfc12ff3" containerName="oc" Jan 07 10:22:28 crc kubenswrapper[5131]: I0107 10:22:28.901212 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z5z56" Jan 07 10:22:28 crc kubenswrapper[5131]: I0107 10:22:28.964461 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z5z56"] Jan 07 10:22:28 crc kubenswrapper[5131]: I0107 10:22:28.978476 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8psbm\" (UniqueName: \"kubernetes.io/projected/f3b6f6a6-c844-4988-ad5f-6a9a056bc8ec-kube-api-access-8psbm\") pod \"redhat-operators-z5z56\" (UID: \"f3b6f6a6-c844-4988-ad5f-6a9a056bc8ec\") " pod="openshift-marketplace/redhat-operators-z5z56" Jan 07 10:22:28 crc kubenswrapper[5131]: I0107 10:22:28.978580 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3b6f6a6-c844-4988-ad5f-6a9a056bc8ec-catalog-content\") pod \"redhat-operators-z5z56\" (UID: \"f3b6f6a6-c844-4988-ad5f-6a9a056bc8ec\") " pod="openshift-marketplace/redhat-operators-z5z56" Jan 07 
10:22:28 crc kubenswrapper[5131]: I0107 10:22:28.978616 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3b6f6a6-c844-4988-ad5f-6a9a056bc8ec-utilities\") pod \"redhat-operators-z5z56\" (UID: \"f3b6f6a6-c844-4988-ad5f-6a9a056bc8ec\") " pod="openshift-marketplace/redhat-operators-z5z56" Jan 07 10:22:29 crc kubenswrapper[5131]: I0107 10:22:29.079568 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3b6f6a6-c844-4988-ad5f-6a9a056bc8ec-utilities\") pod \"redhat-operators-z5z56\" (UID: \"f3b6f6a6-c844-4988-ad5f-6a9a056bc8ec\") " pod="openshift-marketplace/redhat-operators-z5z56" Jan 07 10:22:29 crc kubenswrapper[5131]: I0107 10:22:29.079654 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8psbm\" (UniqueName: \"kubernetes.io/projected/f3b6f6a6-c844-4988-ad5f-6a9a056bc8ec-kube-api-access-8psbm\") pod \"redhat-operators-z5z56\" (UID: \"f3b6f6a6-c844-4988-ad5f-6a9a056bc8ec\") " pod="openshift-marketplace/redhat-operators-z5z56" Jan 07 10:22:29 crc kubenswrapper[5131]: I0107 10:22:29.079716 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3b6f6a6-c844-4988-ad5f-6a9a056bc8ec-catalog-content\") pod \"redhat-operators-z5z56\" (UID: \"f3b6f6a6-c844-4988-ad5f-6a9a056bc8ec\") " pod="openshift-marketplace/redhat-operators-z5z56" Jan 07 10:22:29 crc kubenswrapper[5131]: I0107 10:22:29.080214 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3b6f6a6-c844-4988-ad5f-6a9a056bc8ec-catalog-content\") pod \"redhat-operators-z5z56\" (UID: \"f3b6f6a6-c844-4988-ad5f-6a9a056bc8ec\") " pod="openshift-marketplace/redhat-operators-z5z56" Jan 07 10:22:29 crc kubenswrapper[5131]: I0107 
10:22:29.080212 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3b6f6a6-c844-4988-ad5f-6a9a056bc8ec-utilities\") pod \"redhat-operators-z5z56\" (UID: \"f3b6f6a6-c844-4988-ad5f-6a9a056bc8ec\") " pod="openshift-marketplace/redhat-operators-z5z56"
Jan 07 10:22:29 crc kubenswrapper[5131]: I0107 10:22:29.105402 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8psbm\" (UniqueName: \"kubernetes.io/projected/f3b6f6a6-c844-4988-ad5f-6a9a056bc8ec-kube-api-access-8psbm\") pod \"redhat-operators-z5z56\" (UID: \"f3b6f6a6-c844-4988-ad5f-6a9a056bc8ec\") " pod="openshift-marketplace/redhat-operators-z5z56"
Jan 07 10:22:29 crc kubenswrapper[5131]: I0107 10:22:29.277268 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z5z56"
Jan 07 10:22:29 crc kubenswrapper[5131]: I0107 10:22:29.699020 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z5z56"]
Jan 07 10:22:29 crc kubenswrapper[5131]: I0107 10:22:29.997777 5131 generic.go:358] "Generic (PLEG): container finished" podID="f3b6f6a6-c844-4988-ad5f-6a9a056bc8ec" containerID="36f6998823eea9c7dea03d56338f74b3a42b98a8471b610a9c901ec6fd864f0b" exitCode=0
Jan 07 10:22:29 crc kubenswrapper[5131]: I0107 10:22:29.997876 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z5z56" event={"ID":"f3b6f6a6-c844-4988-ad5f-6a9a056bc8ec","Type":"ContainerDied","Data":"36f6998823eea9c7dea03d56338f74b3a42b98a8471b610a9c901ec6fd864f0b"}
Jan 07 10:22:29 crc kubenswrapper[5131]: I0107 10:22:29.998336 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z5z56" event={"ID":"f3b6f6a6-c844-4988-ad5f-6a9a056bc8ec","Type":"ContainerStarted","Data":"d7948462af20db307e850a521e2b4f6918908c35fab8d79537589e71e0f23935"}
Jan 07 10:22:31 crc kubenswrapper[5131]: I0107 10:22:31.007401 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z5z56" event={"ID":"f3b6f6a6-c844-4988-ad5f-6a9a056bc8ec","Type":"ContainerStarted","Data":"68f262534239bd35cae5adf302ac1b8ed02eabfecb6cf26c44d010edaec0b5eb"}
Jan 07 10:22:32 crc kubenswrapper[5131]: I0107 10:22:32.016949 5131 generic.go:358] "Generic (PLEG): container finished" podID="f3b6f6a6-c844-4988-ad5f-6a9a056bc8ec" containerID="68f262534239bd35cae5adf302ac1b8ed02eabfecb6cf26c44d010edaec0b5eb" exitCode=0
Jan 07 10:22:32 crc kubenswrapper[5131]: I0107 10:22:32.017014 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z5z56" event={"ID":"f3b6f6a6-c844-4988-ad5f-6a9a056bc8ec","Type":"ContainerDied","Data":"68f262534239bd35cae5adf302ac1b8ed02eabfecb6cf26c44d010edaec0b5eb"}
Jan 07 10:22:32 crc kubenswrapper[5131]: I0107 10:22:32.229761 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-75ffdb6fcd-2zfmr_18ffb9d1-d0b4-41bf-84ed-6d47984f831e/control-plane-machine-set-operator/0.log"
Jan 07 10:22:32 crc kubenswrapper[5131]: I0107 10:22:32.409808 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-cw4c4_3c82fced-e466-4e52-8d61-b62e172d3ea9/kube-rbac-proxy/0.log"
Jan 07 10:22:32 crc kubenswrapper[5131]: I0107 10:22:32.431699 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-cw4c4_3c82fced-e466-4e52-8d61-b62e172d3ea9/machine-api-operator/0.log"
Jan 07 10:22:33 crc kubenswrapper[5131]: I0107 10:22:33.028056 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z5z56" event={"ID":"f3b6f6a6-c844-4988-ad5f-6a9a056bc8ec","Type":"ContainerStarted","Data":"9a8fe161b557c172160dd6936eb0238fa951b920f8ce45d96fc097ac6400f00d"}
Jan 07 10:22:33 crc kubenswrapper[5131]: I0107 10:22:33.048989 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-z5z56" podStartSLOduration=4.205997651 podStartE2EDuration="5.048974817s" podCreationTimestamp="2026-01-07 10:22:28 +0000 UTC" firstStartedPulling="2026-01-07 10:22:29.998790769 +0000 UTC m=+1978.165092333" lastFinishedPulling="2026-01-07 10:22:30.841767925 +0000 UTC m=+1979.008069499" observedRunningTime="2026-01-07 10:22:33.045680794 +0000 UTC m=+1981.211982358" watchObservedRunningTime="2026-01-07 10:22:33.048974817 +0000 UTC m=+1981.215276381"
Jan 07 10:22:36 crc kubenswrapper[5131]: I0107 10:22:36.803744 5131 scope.go:117] "RemoveContainer" containerID="df74a4092217e2b7e8717ba9e233f72568342ba718f8035b5a919337968a3e18"
Jan 07 10:22:39 crc kubenswrapper[5131]: I0107 10:22:39.278479 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-z5z56"
Jan 07 10:22:39 crc kubenswrapper[5131]: I0107 10:22:39.280533 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-z5z56"
Jan 07 10:22:39 crc kubenswrapper[5131]: I0107 10:22:39.351890 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-z5z56"
Jan 07 10:22:40 crc kubenswrapper[5131]: I0107 10:22:40.280434 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-z5z56"
Jan 07 10:22:40 crc kubenswrapper[5131]: I0107 10:22:40.339371 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z5z56"]
Jan 07 10:22:42 crc kubenswrapper[5131]: I0107 10:22:42.258013 5131 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-z5z56" podUID="f3b6f6a6-c844-4988-ad5f-6a9a056bc8ec" containerName="registry-server" containerID="cri-o://9a8fe161b557c172160dd6936eb0238fa951b920f8ce45d96fc097ac6400f00d" gracePeriod=2
Jan 07 10:22:42 crc kubenswrapper[5131]: I0107 10:22:42.680967 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z5z56"
Jan 07 10:22:42 crc kubenswrapper[5131]: I0107 10:22:42.805292 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3b6f6a6-c844-4988-ad5f-6a9a056bc8ec-utilities\") pod \"f3b6f6a6-c844-4988-ad5f-6a9a056bc8ec\" (UID: \"f3b6f6a6-c844-4988-ad5f-6a9a056bc8ec\") "
Jan 07 10:22:42 crc kubenswrapper[5131]: I0107 10:22:42.805390 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8psbm\" (UniqueName: \"kubernetes.io/projected/f3b6f6a6-c844-4988-ad5f-6a9a056bc8ec-kube-api-access-8psbm\") pod \"f3b6f6a6-c844-4988-ad5f-6a9a056bc8ec\" (UID: \"f3b6f6a6-c844-4988-ad5f-6a9a056bc8ec\") "
Jan 07 10:22:42 crc kubenswrapper[5131]: I0107 10:22:42.805443 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3b6f6a6-c844-4988-ad5f-6a9a056bc8ec-catalog-content\") pod \"f3b6f6a6-c844-4988-ad5f-6a9a056bc8ec\" (UID: \"f3b6f6a6-c844-4988-ad5f-6a9a056bc8ec\") "
Jan 07 10:22:42 crc kubenswrapper[5131]: I0107 10:22:42.806675 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3b6f6a6-c844-4988-ad5f-6a9a056bc8ec-utilities" (OuterVolumeSpecName: "utilities") pod "f3b6f6a6-c844-4988-ad5f-6a9a056bc8ec" (UID: "f3b6f6a6-c844-4988-ad5f-6a9a056bc8ec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 07 10:22:42 crc kubenswrapper[5131]: I0107 10:22:42.812407 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3b6f6a6-c844-4988-ad5f-6a9a056bc8ec-kube-api-access-8psbm" (OuterVolumeSpecName: "kube-api-access-8psbm") pod "f3b6f6a6-c844-4988-ad5f-6a9a056bc8ec" (UID: "f3b6f6a6-c844-4988-ad5f-6a9a056bc8ec"). InnerVolumeSpecName "kube-api-access-8psbm". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 07 10:22:42 crc kubenswrapper[5131]: I0107 10:22:42.907430 5131 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3b6f6a6-c844-4988-ad5f-6a9a056bc8ec-utilities\") on node \"crc\" DevicePath \"\""
Jan 07 10:22:42 crc kubenswrapper[5131]: I0107 10:22:42.907479 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8psbm\" (UniqueName: \"kubernetes.io/projected/f3b6f6a6-c844-4988-ad5f-6a9a056bc8ec-kube-api-access-8psbm\") on node \"crc\" DevicePath \"\""
Jan 07 10:22:42 crc kubenswrapper[5131]: I0107 10:22:42.925174 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3b6f6a6-c844-4988-ad5f-6a9a056bc8ec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f3b6f6a6-c844-4988-ad5f-6a9a056bc8ec" (UID: "f3b6f6a6-c844-4988-ad5f-6a9a056bc8ec"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 07 10:22:43 crc kubenswrapper[5131]: I0107 10:22:43.008718 5131 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3b6f6a6-c844-4988-ad5f-6a9a056bc8ec-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 07 10:22:43 crc kubenswrapper[5131]: I0107 10:22:43.277115 5131 generic.go:358] "Generic (PLEG): container finished" podID="f3b6f6a6-c844-4988-ad5f-6a9a056bc8ec" containerID="9a8fe161b557c172160dd6936eb0238fa951b920f8ce45d96fc097ac6400f00d" exitCode=0
Jan 07 10:22:43 crc kubenswrapper[5131]: I0107 10:22:43.277253 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z5z56" event={"ID":"f3b6f6a6-c844-4988-ad5f-6a9a056bc8ec","Type":"ContainerDied","Data":"9a8fe161b557c172160dd6936eb0238fa951b920f8ce45d96fc097ac6400f00d"}
Jan 07 10:22:43 crc kubenswrapper[5131]: I0107 10:22:43.277270 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z5z56"
Jan 07 10:22:43 crc kubenswrapper[5131]: I0107 10:22:43.277549 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z5z56" event={"ID":"f3b6f6a6-c844-4988-ad5f-6a9a056bc8ec","Type":"ContainerDied","Data":"d7948462af20db307e850a521e2b4f6918908c35fab8d79537589e71e0f23935"}
Jan 07 10:22:43 crc kubenswrapper[5131]: I0107 10:22:43.277445 5131 scope.go:117] "RemoveContainer" containerID="9a8fe161b557c172160dd6936eb0238fa951b920f8ce45d96fc097ac6400f00d"
Jan 07 10:22:43 crc kubenswrapper[5131]: I0107 10:22:43.308185 5131 scope.go:117] "RemoveContainer" containerID="68f262534239bd35cae5adf302ac1b8ed02eabfecb6cf26c44d010edaec0b5eb"
Jan 07 10:22:43 crc kubenswrapper[5131]: I0107 10:22:43.317675 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z5z56"]
Jan 07 10:22:43 crc kubenswrapper[5131]: I0107 10:22:43.326122 5131 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-z5z56"]
Jan 07 10:22:43 crc kubenswrapper[5131]: I0107 10:22:43.336119 5131 scope.go:117] "RemoveContainer" containerID="36f6998823eea9c7dea03d56338f74b3a42b98a8471b610a9c901ec6fd864f0b"
Jan 07 10:22:43 crc kubenswrapper[5131]: I0107 10:22:43.367647 5131 scope.go:117] "RemoveContainer" containerID="9a8fe161b557c172160dd6936eb0238fa951b920f8ce45d96fc097ac6400f00d"
Jan 07 10:22:43 crc kubenswrapper[5131]: E0107 10:22:43.368092 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a8fe161b557c172160dd6936eb0238fa951b920f8ce45d96fc097ac6400f00d\": container with ID starting with 9a8fe161b557c172160dd6936eb0238fa951b920f8ce45d96fc097ac6400f00d not found: ID does not exist" containerID="9a8fe161b557c172160dd6936eb0238fa951b920f8ce45d96fc097ac6400f00d"
Jan 07 10:22:43 crc kubenswrapper[5131]: I0107 10:22:43.368137 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a8fe161b557c172160dd6936eb0238fa951b920f8ce45d96fc097ac6400f00d"} err="failed to get container status \"9a8fe161b557c172160dd6936eb0238fa951b920f8ce45d96fc097ac6400f00d\": rpc error: code = NotFound desc = could not find container \"9a8fe161b557c172160dd6936eb0238fa951b920f8ce45d96fc097ac6400f00d\": container with ID starting with 9a8fe161b557c172160dd6936eb0238fa951b920f8ce45d96fc097ac6400f00d not found: ID does not exist"
Jan 07 10:22:43 crc kubenswrapper[5131]: I0107 10:22:43.368167 5131 scope.go:117] "RemoveContainer" containerID="68f262534239bd35cae5adf302ac1b8ed02eabfecb6cf26c44d010edaec0b5eb"
Jan 07 10:22:43 crc kubenswrapper[5131]: E0107 10:22:43.368628 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68f262534239bd35cae5adf302ac1b8ed02eabfecb6cf26c44d010edaec0b5eb\": container with ID starting with 68f262534239bd35cae5adf302ac1b8ed02eabfecb6cf26c44d010edaec0b5eb not found: ID does not exist" containerID="68f262534239bd35cae5adf302ac1b8ed02eabfecb6cf26c44d010edaec0b5eb"
Jan 07 10:22:43 crc kubenswrapper[5131]: I0107 10:22:43.368767 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68f262534239bd35cae5adf302ac1b8ed02eabfecb6cf26c44d010edaec0b5eb"} err="failed to get container status \"68f262534239bd35cae5adf302ac1b8ed02eabfecb6cf26c44d010edaec0b5eb\": rpc error: code = NotFound desc = could not find container \"68f262534239bd35cae5adf302ac1b8ed02eabfecb6cf26c44d010edaec0b5eb\": container with ID starting with 68f262534239bd35cae5adf302ac1b8ed02eabfecb6cf26c44d010edaec0b5eb not found: ID does not exist"
Jan 07 10:22:43 crc kubenswrapper[5131]: I0107 10:22:43.368991 5131 scope.go:117] "RemoveContainer" containerID="36f6998823eea9c7dea03d56338f74b3a42b98a8471b610a9c901ec6fd864f0b"
Jan 07 10:22:43 crc kubenswrapper[5131]: E0107 10:22:43.369446 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36f6998823eea9c7dea03d56338f74b3a42b98a8471b610a9c901ec6fd864f0b\": container with ID starting with 36f6998823eea9c7dea03d56338f74b3a42b98a8471b610a9c901ec6fd864f0b not found: ID does not exist" containerID="36f6998823eea9c7dea03d56338f74b3a42b98a8471b610a9c901ec6fd864f0b"
Jan 07 10:22:43 crc kubenswrapper[5131]: I0107 10:22:43.369476 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36f6998823eea9c7dea03d56338f74b3a42b98a8471b610a9c901ec6fd864f0b"} err="failed to get container status \"36f6998823eea9c7dea03d56338f74b3a42b98a8471b610a9c901ec6fd864f0b\": rpc error: code = NotFound desc = could not find container \"36f6998823eea9c7dea03d56338f74b3a42b98a8471b610a9c901ec6fd864f0b\": container with ID starting with 36f6998823eea9c7dea03d56338f74b3a42b98a8471b610a9c901ec6fd864f0b not found: ID does not exist"
Jan 07 10:22:44 crc kubenswrapper[5131]: I0107 10:22:44.192518 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3b6f6a6-c844-4988-ad5f-6a9a056bc8ec" path="/var/lib/kubelet/pods/f3b6f6a6-c844-4988-ad5f-6a9a056bc8ec/volumes"
Jan 07 10:22:46 crc kubenswrapper[5131]: I0107 10:22:46.308357 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858d87f86b-ztkw9_338352b1-4821-4bab-929b-c47e7583474b/cert-manager-controller/0.log"
Jan 07 10:22:46 crc kubenswrapper[5131]: I0107 10:22:46.394371 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7dbf76d5c8-fj7bn_cab27a2d-a22b-44d7-83c1-f57c6d2bad11/cert-manager-cainjector/0.log"
Jan 07 10:22:46 crc kubenswrapper[5131]: I0107 10:22:46.481990 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-7894b5b9b4-pnzxn_874e81d2-06c0-4aad-aa39-701198f0be4d/cert-manager-webhook/0.log"
Jan 07 10:23:02 crc kubenswrapper[5131]: I0107 10:23:02.809590 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77d6g_b70cf65d-ed30-49ae-b590-19a7e38dfae7/util/0.log"
Jan 07 10:23:03 crc kubenswrapper[5131]: I0107 10:23:03.019503 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77d6g_b70cf65d-ed30-49ae-b590-19a7e38dfae7/pull/0.log"
Jan 07 10:23:03 crc kubenswrapper[5131]: I0107 10:23:03.022208 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77d6g_b70cf65d-ed30-49ae-b590-19a7e38dfae7/util/0.log"
Jan 07 10:23:03 crc kubenswrapper[5131]: I0107 10:23:03.029862 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77d6g_b70cf65d-ed30-49ae-b590-19a7e38dfae7/pull/0.log"
Jan 07 10:23:03 crc kubenswrapper[5131]: I0107 10:23:03.203743 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77d6g_b70cf65d-ed30-49ae-b590-19a7e38dfae7/util/0.log"
Jan 07 10:23:03 crc kubenswrapper[5131]: I0107 10:23:03.208200 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77d6g_b70cf65d-ed30-49ae-b590-19a7e38dfae7/extract/0.log"
Jan 07 10:23:03 crc kubenswrapper[5131]: I0107 10:23:03.223252 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77d6g_b70cf65d-ed30-49ae-b590-19a7e38dfae7/pull/0.log"
Jan 07 10:23:03 crc kubenswrapper[5131]: I0107 10:23:03.391476 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fdkscr_1a9c62ed-f7ff-4259-bd13-a84f00469f5b/util/0.log"
Jan 07 10:23:03 crc kubenswrapper[5131]: I0107 10:23:03.587155 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fdkscr_1a9c62ed-f7ff-4259-bd13-a84f00469f5b/util/0.log"
Jan 07 10:23:03 crc kubenswrapper[5131]: I0107 10:23:03.607256 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fdkscr_1a9c62ed-f7ff-4259-bd13-a84f00469f5b/pull/0.log"
Jan 07 10:23:03 crc kubenswrapper[5131]: I0107 10:23:03.636428 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fdkscr_1a9c62ed-f7ff-4259-bd13-a84f00469f5b/pull/0.log"
Jan 07 10:23:03 crc kubenswrapper[5131]: I0107 10:23:03.795166 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fdkscr_1a9c62ed-f7ff-4259-bd13-a84f00469f5b/pull/0.log"
Jan 07 10:23:03 crc kubenswrapper[5131]: I0107 10:23:03.818252 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fdkscr_1a9c62ed-f7ff-4259-bd13-a84f00469f5b/extract/0.log"
Jan 07 10:23:03 crc kubenswrapper[5131]: I0107 10:23:03.823361 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fdkscr_1a9c62ed-f7ff-4259-bd13-a84f00469f5b/util/0.log"
Jan 07 10:23:04 crc kubenswrapper[5131]: I0107 10:23:04.029057 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejxcph_8c0506be-0968-43b3-bd5d-2a352b0693bf/util/0.log"
Jan 07 10:23:04 crc kubenswrapper[5131]: I0107 10:23:04.180349 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejxcph_8c0506be-0968-43b3-bd5d-2a352b0693bf/pull/0.log"
Jan 07 10:23:04 crc kubenswrapper[5131]: I0107 10:23:04.185198 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejxcph_8c0506be-0968-43b3-bd5d-2a352b0693bf/util/0.log"
Jan 07 10:23:04 crc kubenswrapper[5131]: I0107 10:23:04.216340 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejxcph_8c0506be-0968-43b3-bd5d-2a352b0693bf/pull/0.log"
Jan 07 10:23:04 crc kubenswrapper[5131]: I0107 10:23:04.353500 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejxcph_8c0506be-0968-43b3-bd5d-2a352b0693bf/extract/0.log"
Jan 07 10:23:04 crc kubenswrapper[5131]: I0107 10:23:04.353548 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejxcph_8c0506be-0968-43b3-bd5d-2a352b0693bf/pull/0.log"
Jan 07 10:23:04 crc kubenswrapper[5131]: I0107 10:23:04.356644 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejxcph_8c0506be-0968-43b3-bd5d-2a352b0693bf/util/0.log"
Jan 07 10:23:04 crc kubenswrapper[5131]: I0107 10:23:04.524148 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rbdss_e2193270-1adc-4d1b-b07b-b705d3c0fa2e/util/0.log"
Jan 07 10:23:04 crc kubenswrapper[5131]: I0107 10:23:04.663614 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rbdss_e2193270-1adc-4d1b-b07b-b705d3c0fa2e/pull/0.log"
Jan 07 10:23:04 crc kubenswrapper[5131]: I0107 10:23:04.671876 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rbdss_e2193270-1adc-4d1b-b07b-b705d3c0fa2e/util/0.log"
Jan 07 10:23:04 crc kubenswrapper[5131]: I0107 10:23:04.696690 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rbdss_e2193270-1adc-4d1b-b07b-b705d3c0fa2e/pull/0.log"
Jan 07 10:23:04 crc kubenswrapper[5131]: I0107 10:23:04.845948 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rbdss_e2193270-1adc-4d1b-b07b-b705d3c0fa2e/extract/0.log"
Jan 07 10:23:04 crc kubenswrapper[5131]: I0107 10:23:04.856581 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rbdss_e2193270-1adc-4d1b-b07b-b705d3c0fa2e/pull/0.log"
Jan 07 10:23:04 crc kubenswrapper[5131]: I0107 10:23:04.888043 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rbdss_e2193270-1adc-4d1b-b07b-b705d3c0fa2e/util/0.log"
Jan 07 10:23:05 crc kubenswrapper[5131]: I0107 10:23:05.045640 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-js8mt_4abda25d-f804-4184-a568-5c0fa0263526/extract-utilities/0.log"
Jan 07 10:23:05 crc kubenswrapper[5131]: I0107 10:23:05.161094 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-js8mt_4abda25d-f804-4184-a568-5c0fa0263526/extract-utilities/0.log"
Jan 07 10:23:05 crc kubenswrapper[5131]: I0107 10:23:05.183119 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-js8mt_4abda25d-f804-4184-a568-5c0fa0263526/extract-content/0.log"
Jan 07 10:23:05 crc kubenswrapper[5131]: I0107 10:23:05.202157 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-js8mt_4abda25d-f804-4184-a568-5c0fa0263526/extract-content/0.log"
Jan 07 10:23:05 crc kubenswrapper[5131]: I0107 10:23:05.351462 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-js8mt_4abda25d-f804-4184-a568-5c0fa0263526/extract-utilities/0.log"
Jan 07 10:23:05 crc kubenswrapper[5131]: I0107 10:23:05.375372 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-js8mt_4abda25d-f804-4184-a568-5c0fa0263526/extract-content/0.log"
Jan 07 10:23:05 crc kubenswrapper[5131]: I0107 10:23:05.609671 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-js8mt_4abda25d-f804-4184-a568-5c0fa0263526/registry-server/0.log"
Jan 07 10:23:05 crc kubenswrapper[5131]: I0107 10:23:05.619050 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9km69_04a6ed00-35a6-41aa-a83a-f388fabdec33/extract-utilities/0.log"
Jan 07 10:23:05 crc kubenswrapper[5131]: I0107 10:23:05.748502 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9km69_04a6ed00-35a6-41aa-a83a-f388fabdec33/extract-utilities/0.log"
Jan 07 10:23:05 crc kubenswrapper[5131]: I0107 10:23:05.758372 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9km69_04a6ed00-35a6-41aa-a83a-f388fabdec33/extract-content/0.log"
Jan 07 10:23:05 crc kubenswrapper[5131]: I0107 10:23:05.797007 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9km69_04a6ed00-35a6-41aa-a83a-f388fabdec33/extract-content/0.log"
Jan 07 10:23:05 crc kubenswrapper[5131]: I0107 10:23:05.970350 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9km69_04a6ed00-35a6-41aa-a83a-f388fabdec33/extract-utilities/0.log"
Jan 07 10:23:05 crc kubenswrapper[5131]: I0107 10:23:05.980209 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9km69_04a6ed00-35a6-41aa-a83a-f388fabdec33/extract-content/0.log"
Jan 07 10:23:06 crc kubenswrapper[5131]: I0107 10:23:06.005752 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-x86wx_d853fb7e-12e8-4060-849f-428cc2b6e85f/marketplace-operator/0.log"
Jan 07 10:23:06 crc kubenswrapper[5131]: I0107 10:23:06.146022 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7wdqb_adc35ff8-fb6e-44fb-ad67-4ba5b89e8a31/extract-utilities/0.log"
Jan 07 10:23:06 crc kubenswrapper[5131]: I0107 10:23:06.235934 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9km69_04a6ed00-35a6-41aa-a83a-f388fabdec33/registry-server/0.log"
Jan 07 10:23:06 crc kubenswrapper[5131]: I0107 10:23:06.354994 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7wdqb_adc35ff8-fb6e-44fb-ad67-4ba5b89e8a31/extract-content/0.log"
Jan 07 10:23:06 crc kubenswrapper[5131]: I0107 10:23:06.380958 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7wdqb_adc35ff8-fb6e-44fb-ad67-4ba5b89e8a31/extract-content/0.log"
Jan 07 10:23:06 crc kubenswrapper[5131]: I0107 10:23:06.391013 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7wdqb_adc35ff8-fb6e-44fb-ad67-4ba5b89e8a31/extract-utilities/0.log"
Jan 07 10:23:06 crc kubenswrapper[5131]: I0107 10:23:06.534581 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7wdqb_adc35ff8-fb6e-44fb-ad67-4ba5b89e8a31/extract-content/0.log"
Jan 07 10:23:06 crc kubenswrapper[5131]: I0107 10:23:06.553337 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7wdqb_adc35ff8-fb6e-44fb-ad67-4ba5b89e8a31/extract-utilities/0.log"
Jan 07 10:23:06 crc kubenswrapper[5131]: I0107 10:23:06.819378 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7wdqb_adc35ff8-fb6e-44fb-ad67-4ba5b89e8a31/registry-server/0.log"
Jan 07 10:23:19 crc kubenswrapper[5131]: I0107 10:23:19.390435 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-9bc85b4bf-7jtf6_c8e50a15-61cb-4e8a-aa55-f77f526b5a0d/prometheus-operator/0.log"
Jan 07 10:23:19 crc kubenswrapper[5131]: I0107 10:23:19.490518 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-795b46cc9d-lxzpk_83eb167b-dda6-4f17-b5be-fff07421691b/prometheus-operator-admission-webhook/0.log"
Jan 07 10:23:19 crc kubenswrapper[5131]: I0107 10:23:19.525225 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-795b46cc9d-w9fz2_5d36a675-e1c1-4c4e-9713-b9a91a58a13c/prometheus-operator-admission-webhook/0.log"
Jan 07 10:23:19 crc kubenswrapper[5131]: I0107 10:23:19.687037 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-85c68dddb-mdzv4_991d7296-8f72-4c27-9a9c-de2becfb27dd/operator/0.log"
Jan 07 10:23:19 crc kubenswrapper[5131]: I0107 10:23:19.692585 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-669c9f96b5-prwfx_4f2a238b-1c94-4943-8e48-8f6d69d3d975/perses-operator/0.log"
Jan 07 10:23:56 crc kubenswrapper[5131]: I0107 10:23:56.929088 5131 generic.go:358] "Generic (PLEG): container finished" podID="f714ec3d-a1c2-4c74-b0f9-f5832c6f0727" containerID="bba4e5523d763abf6f3363469fb7c07ca75e6469530effe84eec61da0f1e4720" exitCode=0
Jan 07 10:23:56 crc kubenswrapper[5131]: I0107 10:23:56.929262 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-96cfq/must-gather-lfw95" event={"ID":"f714ec3d-a1c2-4c74-b0f9-f5832c6f0727","Type":"ContainerDied","Data":"bba4e5523d763abf6f3363469fb7c07ca75e6469530effe84eec61da0f1e4720"}
Jan 07 10:23:56 crc kubenswrapper[5131]: I0107 10:23:56.930317 5131 scope.go:117] "RemoveContainer" containerID="bba4e5523d763abf6f3363469fb7c07ca75e6469530effe84eec61da0f1e4720"
Jan 07 10:23:57 crc kubenswrapper[5131]: I0107 10:23:57.479960 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-96cfq_must-gather-lfw95_f714ec3d-a1c2-4c74-b0f9-f5832c6f0727/gather/0.log"
Jan 07 10:24:00 crc kubenswrapper[5131]: I0107 10:24:00.177390 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29463024-z2d4l"]
Jan 07 10:24:00 crc kubenswrapper[5131]: I0107 10:24:00.179138 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f3b6f6a6-c844-4988-ad5f-6a9a056bc8ec" containerName="extract-content"
Jan 07 10:24:00 crc kubenswrapper[5131]: I0107 10:24:00.179166 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3b6f6a6-c844-4988-ad5f-6a9a056bc8ec" containerName="extract-content"
Jan 07 10:24:00 crc kubenswrapper[5131]: I0107 10:24:00.179197 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f3b6f6a6-c844-4988-ad5f-6a9a056bc8ec" containerName="registry-server"
Jan 07 10:24:00 crc kubenswrapper[5131]: I0107 10:24:00.179206 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3b6f6a6-c844-4988-ad5f-6a9a056bc8ec" containerName="registry-server"
Jan 07 10:24:00 crc kubenswrapper[5131]: I0107 10:24:00.179236 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f3b6f6a6-c844-4988-ad5f-6a9a056bc8ec" containerName="extract-utilities"
Jan 07 10:24:00 crc kubenswrapper[5131]: I0107 10:24:00.179244 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3b6f6a6-c844-4988-ad5f-6a9a056bc8ec" containerName="extract-utilities"
Jan 07 10:24:00 crc kubenswrapper[5131]: I0107 10:24:00.179420 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="f3b6f6a6-c844-4988-ad5f-6a9a056bc8ec" containerName="registry-server"
Jan 07 10:24:00 crc kubenswrapper[5131]: I0107 10:24:00.185818 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29463024-z2d4l"
Jan 07 10:24:00 crc kubenswrapper[5131]: I0107 10:24:00.189287 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-l8fwl\""
Jan 07 10:24:00 crc kubenswrapper[5131]: I0107 10:24:00.190241 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 07 10:24:00 crc kubenswrapper[5131]: I0107 10:24:00.190807 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 07 10:24:00 crc kubenswrapper[5131]: I0107 10:24:00.210238 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29463024-z2d4l"]
Jan 07 10:24:00 crc kubenswrapper[5131]: I0107 10:24:00.312377 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdtp8\" (UniqueName: \"kubernetes.io/projected/15d89ab0-da0c-4dad-a943-e98b7036c7fe-kube-api-access-gdtp8\") pod \"auto-csr-approver-29463024-z2d4l\" (UID: \"15d89ab0-da0c-4dad-a943-e98b7036c7fe\") " pod="openshift-infra/auto-csr-approver-29463024-z2d4l"
Jan 07 10:24:00 crc kubenswrapper[5131]: I0107 10:24:00.414486 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gdtp8\" (UniqueName: \"kubernetes.io/projected/15d89ab0-da0c-4dad-a943-e98b7036c7fe-kube-api-access-gdtp8\") pod \"auto-csr-approver-29463024-z2d4l\" (UID: \"15d89ab0-da0c-4dad-a943-e98b7036c7fe\") " pod="openshift-infra/auto-csr-approver-29463024-z2d4l"
Jan 07 10:24:00 crc kubenswrapper[5131]: I0107 10:24:00.450489 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdtp8\" (UniqueName: \"kubernetes.io/projected/15d89ab0-da0c-4dad-a943-e98b7036c7fe-kube-api-access-gdtp8\") pod \"auto-csr-approver-29463024-z2d4l\" (UID: \"15d89ab0-da0c-4dad-a943-e98b7036c7fe\") " pod="openshift-infra/auto-csr-approver-29463024-z2d4l"
Jan 07 10:24:00 crc kubenswrapper[5131]: I0107 10:24:00.517743 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29463024-z2d4l"
Jan 07 10:24:00 crc kubenswrapper[5131]: I0107 10:24:00.834912 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29463024-z2d4l"]
Jan 07 10:24:00 crc kubenswrapper[5131]: I0107 10:24:00.976900 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29463024-z2d4l" event={"ID":"15d89ab0-da0c-4dad-a943-e98b7036c7fe","Type":"ContainerStarted","Data":"5bd88c5d785b6e6ed0146642d2bbfe41ac3d7087395d76b8af95be11a325f946"}
Jan 07 10:24:03 crc kubenswrapper[5131]: I0107 10:24:03.000024 5131 generic.go:358] "Generic (PLEG): container finished" podID="15d89ab0-da0c-4dad-a943-e98b7036c7fe" containerID="919396266a2ec22c6e8b0660ceb749b4aa372deb5a552500b91172593c7ef225" exitCode=0
Jan 07 10:24:03 crc kubenswrapper[5131]: I0107 10:24:03.000133 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29463024-z2d4l" event={"ID":"15d89ab0-da0c-4dad-a943-e98b7036c7fe","Type":"ContainerDied","Data":"919396266a2ec22c6e8b0660ceb749b4aa372deb5a552500b91172593c7ef225"}
Jan 07 10:24:03 crc kubenswrapper[5131]: I0107 10:24:03.631503 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-96cfq/must-gather-lfw95"]
Jan 07 10:24:03 crc kubenswrapper[5131]: I0107 10:24:03.632257 5131 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-must-gather-96cfq/must-gather-lfw95" podUID="f714ec3d-a1c2-4c74-b0f9-f5832c6f0727" containerName="copy" containerID="cri-o://821daaee4f27cc966d5a7808bd48b7ce2d191fd9005dd5ea0fc5f0ac43468837" gracePeriod=2
Jan 07 10:24:03 crc kubenswrapper[5131]: I0107 10:24:03.634720 5131 status_manager.go:895] "Failed to get status for pod" podUID="f714ec3d-a1c2-4c74-b0f9-f5832c6f0727" pod="openshift-must-gather-96cfq/must-gather-lfw95" err="pods \"must-gather-lfw95\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-96cfq\": no relationship found between node 'crc' and this object"
Jan 07 10:24:03 crc kubenswrapper[5131]: I0107 10:24:03.650204 5131 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-96cfq/must-gather-lfw95"]
Jan 07 10:24:04 crc kubenswrapper[5131]: I0107 10:24:04.000779 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-96cfq_must-gather-lfw95_f714ec3d-a1c2-4c74-b0f9-f5832c6f0727/copy/0.log"
Jan 07 10:24:04 crc kubenswrapper[5131]: I0107 10:24:04.002810 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-96cfq/must-gather-lfw95"
Jan 07 10:24:04 crc kubenswrapper[5131]: I0107 10:24:04.004572 5131 status_manager.go:895] "Failed to get status for pod" podUID="f714ec3d-a1c2-4c74-b0f9-f5832c6f0727" pod="openshift-must-gather-96cfq/must-gather-lfw95" err="pods \"must-gather-lfw95\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-96cfq\": no relationship found between node 'crc' and this object"
Jan 07 10:24:04 crc kubenswrapper[5131]: I0107 10:24:04.010073 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-96cfq_must-gather-lfw95_f714ec3d-a1c2-4c74-b0f9-f5832c6f0727/copy/0.log"
Jan 07 10:24:04 crc kubenswrapper[5131]: I0107 10:24:04.010559 5131 generic.go:358] "Generic (PLEG): container finished" podID="f714ec3d-a1c2-4c74-b0f9-f5832c6f0727" containerID="821daaee4f27cc966d5a7808bd48b7ce2d191fd9005dd5ea0fc5f0ac43468837" exitCode=143
Jan 07 10:24:04 crc kubenswrapper[5131]: I0107 10:24:04.010890 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-96cfq/must-gather-lfw95"
Jan 07 10:24:04 crc kubenswrapper[5131]: I0107 10:24:04.010906 5131 scope.go:117] "RemoveContainer" containerID="821daaee4f27cc966d5a7808bd48b7ce2d191fd9005dd5ea0fc5f0ac43468837"
Jan 07 10:24:04 crc kubenswrapper[5131]: I0107 10:24:04.012573 5131 status_manager.go:895] "Failed to get status for pod" podUID="f714ec3d-a1c2-4c74-b0f9-f5832c6f0727" pod="openshift-must-gather-96cfq/must-gather-lfw95" err="pods \"must-gather-lfw95\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-96cfq\": no relationship found between node 'crc' and this object"
Jan 07 10:24:04 crc kubenswrapper[5131]: I0107 10:24:04.045488 5131 scope.go:117] "RemoveContainer" containerID="bba4e5523d763abf6f3363469fb7c07ca75e6469530effe84eec61da0f1e4720"
Jan 07 10:24:04 crc kubenswrapper[5131]: I0107 10:24:04.116909 5131 scope.go:117] "RemoveContainer" containerID="821daaee4f27cc966d5a7808bd48b7ce2d191fd9005dd5ea0fc5f0ac43468837"
Jan 07 10:24:04 crc kubenswrapper[5131]: E0107 10:24:04.117355 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"821daaee4f27cc966d5a7808bd48b7ce2d191fd9005dd5ea0fc5f0ac43468837\": container with ID starting with 821daaee4f27cc966d5a7808bd48b7ce2d191fd9005dd5ea0fc5f0ac43468837 not found: ID does not exist" containerID="821daaee4f27cc966d5a7808bd48b7ce2d191fd9005dd5ea0fc5f0ac43468837"
Jan 07 10:24:04 crc kubenswrapper[5131]: I0107 10:24:04.117408 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"821daaee4f27cc966d5a7808bd48b7ce2d191fd9005dd5ea0fc5f0ac43468837"} err="failed to get container status \"821daaee4f27cc966d5a7808bd48b7ce2d191fd9005dd5ea0fc5f0ac43468837\": rpc error: code = NotFound desc = could not find container \"821daaee4f27cc966d5a7808bd48b7ce2d191fd9005dd5ea0fc5f0ac43468837\":
container with ID starting with 821daaee4f27cc966d5a7808bd48b7ce2d191fd9005dd5ea0fc5f0ac43468837 not found: ID does not exist" Jan 07 10:24:04 crc kubenswrapper[5131]: I0107 10:24:04.117440 5131 scope.go:117] "RemoveContainer" containerID="bba4e5523d763abf6f3363469fb7c07ca75e6469530effe84eec61da0f1e4720" Jan 07 10:24:04 crc kubenswrapper[5131]: E0107 10:24:04.118043 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bba4e5523d763abf6f3363469fb7c07ca75e6469530effe84eec61da0f1e4720\": container with ID starting with bba4e5523d763abf6f3363469fb7c07ca75e6469530effe84eec61da0f1e4720 not found: ID does not exist" containerID="bba4e5523d763abf6f3363469fb7c07ca75e6469530effe84eec61da0f1e4720" Jan 07 10:24:04 crc kubenswrapper[5131]: I0107 10:24:04.118091 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bba4e5523d763abf6f3363469fb7c07ca75e6469530effe84eec61da0f1e4720"} err="failed to get container status \"bba4e5523d763abf6f3363469fb7c07ca75e6469530effe84eec61da0f1e4720\": rpc error: code = NotFound desc = could not find container \"bba4e5523d763abf6f3363469fb7c07ca75e6469530effe84eec61da0f1e4720\": container with ID starting with bba4e5523d763abf6f3363469fb7c07ca75e6469530effe84eec61da0f1e4720 not found: ID does not exist" Jan 07 10:24:04 crc kubenswrapper[5131]: I0107 10:24:04.177855 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f714ec3d-a1c2-4c74-b0f9-f5832c6f0727-must-gather-output\") pod \"f714ec3d-a1c2-4c74-b0f9-f5832c6f0727\" (UID: \"f714ec3d-a1c2-4c74-b0f9-f5832c6f0727\") " Jan 07 10:24:04 crc kubenswrapper[5131]: I0107 10:24:04.178029 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xzqj\" (UniqueName: \"kubernetes.io/projected/f714ec3d-a1c2-4c74-b0f9-f5832c6f0727-kube-api-access-4xzqj\") 
pod \"f714ec3d-a1c2-4c74-b0f9-f5832c6f0727\" (UID: \"f714ec3d-a1c2-4c74-b0f9-f5832c6f0727\") " Jan 07 10:24:04 crc kubenswrapper[5131]: I0107 10:24:04.191102 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f714ec3d-a1c2-4c74-b0f9-f5832c6f0727-kube-api-access-4xzqj" (OuterVolumeSpecName: "kube-api-access-4xzqj") pod "f714ec3d-a1c2-4c74-b0f9-f5832c6f0727" (UID: "f714ec3d-a1c2-4c74-b0f9-f5832c6f0727"). InnerVolumeSpecName "kube-api-access-4xzqj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 10:24:04 crc kubenswrapper[5131]: I0107 10:24:04.228643 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f714ec3d-a1c2-4c74-b0f9-f5832c6f0727-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "f714ec3d-a1c2-4c74-b0f9-f5832c6f0727" (UID: "f714ec3d-a1c2-4c74-b0f9-f5832c6f0727"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:24:04 crc kubenswrapper[5131]: I0107 10:24:04.232497 5131 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29463024-z2d4l" Jan 07 10:24:04 crc kubenswrapper[5131]: I0107 10:24:04.280006 5131 reconciler_common.go:299] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f714ec3d-a1c2-4c74-b0f9-f5832c6f0727-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 07 10:24:04 crc kubenswrapper[5131]: I0107 10:24:04.280039 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4xzqj\" (UniqueName: \"kubernetes.io/projected/f714ec3d-a1c2-4c74-b0f9-f5832c6f0727-kube-api-access-4xzqj\") on node \"crc\" DevicePath \"\"" Jan 07 10:24:04 crc kubenswrapper[5131]: I0107 10:24:04.381150 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdtp8\" (UniqueName: \"kubernetes.io/projected/15d89ab0-da0c-4dad-a943-e98b7036c7fe-kube-api-access-gdtp8\") pod \"15d89ab0-da0c-4dad-a943-e98b7036c7fe\" (UID: \"15d89ab0-da0c-4dad-a943-e98b7036c7fe\") " Jan 07 10:24:04 crc kubenswrapper[5131]: I0107 10:24:04.387347 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15d89ab0-da0c-4dad-a943-e98b7036c7fe-kube-api-access-gdtp8" (OuterVolumeSpecName: "kube-api-access-gdtp8") pod "15d89ab0-da0c-4dad-a943-e98b7036c7fe" (UID: "15d89ab0-da0c-4dad-a943-e98b7036c7fe"). InnerVolumeSpecName "kube-api-access-gdtp8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 10:24:04 crc kubenswrapper[5131]: I0107 10:24:04.482288 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gdtp8\" (UniqueName: \"kubernetes.io/projected/15d89ab0-da0c-4dad-a943-e98b7036c7fe-kube-api-access-gdtp8\") on node \"crc\" DevicePath \"\"" Jan 07 10:24:05 crc kubenswrapper[5131]: I0107 10:24:05.023625 5131 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29463024-z2d4l" Jan 07 10:24:05 crc kubenswrapper[5131]: I0107 10:24:05.023702 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29463024-z2d4l" event={"ID":"15d89ab0-da0c-4dad-a943-e98b7036c7fe","Type":"ContainerDied","Data":"5bd88c5d785b6e6ed0146642d2bbfe41ac3d7087395d76b8af95be11a325f946"} Jan 07 10:24:05 crc kubenswrapper[5131]: I0107 10:24:05.024119 5131 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bd88c5d785b6e6ed0146642d2bbfe41ac3d7087395d76b8af95be11a325f946" Jan 07 10:24:05 crc kubenswrapper[5131]: I0107 10:24:05.309378 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29463018-c7dtt"] Jan 07 10:24:05 crc kubenswrapper[5131]: I0107 10:24:05.314036 5131 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29463018-c7dtt"] Jan 07 10:24:06 crc kubenswrapper[5131]: I0107 10:24:06.195634 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2ab93fc-e076-411d-9769-d911fd2898b1" path="/var/lib/kubelet/pods/c2ab93fc-e076-411d-9769-d911fd2898b1/volumes" Jan 07 10:24:06 crc kubenswrapper[5131]: I0107 10:24:06.197038 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f714ec3d-a1c2-4c74-b0f9-f5832c6f0727" path="/var/lib/kubelet/pods/f714ec3d-a1c2-4c74-b0f9-f5832c6f0727/volumes" Jan 07 10:24:20 crc kubenswrapper[5131]: I0107 10:24:20.663939 5131 patch_prober.go:28] interesting pod/machine-config-daemon-dvdrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 07 10:24:20 crc kubenswrapper[5131]: I0107 10:24:20.664695 5131 prober.go:120] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 07 10:24:36 crc kubenswrapper[5131]: I0107 10:24:36.789048 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wcqw9_a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1/kube-multus/0.log" Jan 07 10:24:36 crc kubenswrapper[5131]: I0107 10:24:36.789085 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wcqw9_a6c40a5d-1564-45c0-9d10-0d83c0bd4ee1/kube-multus/0.log" Jan 07 10:24:36 crc kubenswrapper[5131]: I0107 10:24:36.794145 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 07 10:24:36 crc kubenswrapper[5131]: I0107 10:24:36.795031 5131 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 07 10:24:36 crc kubenswrapper[5131]: I0107 10:24:36.992265 5131 scope.go:117] "RemoveContainer" containerID="7c193104d83fe6f04b986e7f6c25a781348580872df2c12e4eb38bb223caaa46" Jan 07 10:24:46 crc kubenswrapper[5131]: I0107 10:24:46.638391 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-66zjc"] Jan 07 10:24:46 crc kubenswrapper[5131]: I0107 10:24:46.640747 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f714ec3d-a1c2-4c74-b0f9-f5832c6f0727" containerName="gather" Jan 07 10:24:46 crc kubenswrapper[5131]: I0107 10:24:46.640778 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="f714ec3d-a1c2-4c74-b0f9-f5832c6f0727" containerName="gather" Jan 07 10:24:46 crc kubenswrapper[5131]: I0107 
10:24:46.640820 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="15d89ab0-da0c-4dad-a943-e98b7036c7fe" containerName="oc" Jan 07 10:24:46 crc kubenswrapper[5131]: I0107 10:24:46.640871 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="15d89ab0-da0c-4dad-a943-e98b7036c7fe" containerName="oc" Jan 07 10:24:46 crc kubenswrapper[5131]: I0107 10:24:46.640952 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f714ec3d-a1c2-4c74-b0f9-f5832c6f0727" containerName="copy" Jan 07 10:24:46 crc kubenswrapper[5131]: I0107 10:24:46.640970 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="f714ec3d-a1c2-4c74-b0f9-f5832c6f0727" containerName="copy" Jan 07 10:24:46 crc kubenswrapper[5131]: I0107 10:24:46.641316 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="15d89ab0-da0c-4dad-a943-e98b7036c7fe" containerName="oc" Jan 07 10:24:46 crc kubenswrapper[5131]: I0107 10:24:46.641361 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="f714ec3d-a1c2-4c74-b0f9-f5832c6f0727" containerName="gather" Jan 07 10:24:46 crc kubenswrapper[5131]: I0107 10:24:46.641398 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="f714ec3d-a1c2-4c74-b0f9-f5832c6f0727" containerName="copy" Jan 07 10:24:46 crc kubenswrapper[5131]: I0107 10:24:46.667563 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-66zjc"] Jan 07 10:24:46 crc kubenswrapper[5131]: I0107 10:24:46.667756 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-66zjc" Jan 07 10:24:46 crc kubenswrapper[5131]: I0107 10:24:46.732734 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e9a6fd9-b8d8-46e6-9c68-ff0f6e6e96dd-utilities\") pod \"community-operators-66zjc\" (UID: \"9e9a6fd9-b8d8-46e6-9c68-ff0f6e6e96dd\") " pod="openshift-marketplace/community-operators-66zjc" Jan 07 10:24:46 crc kubenswrapper[5131]: I0107 10:24:46.732935 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e9a6fd9-b8d8-46e6-9c68-ff0f6e6e96dd-catalog-content\") pod \"community-operators-66zjc\" (UID: \"9e9a6fd9-b8d8-46e6-9c68-ff0f6e6e96dd\") " pod="openshift-marketplace/community-operators-66zjc" Jan 07 10:24:46 crc kubenswrapper[5131]: I0107 10:24:46.733085 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtrsl\" (UniqueName: \"kubernetes.io/projected/9e9a6fd9-b8d8-46e6-9c68-ff0f6e6e96dd-kube-api-access-dtrsl\") pod \"community-operators-66zjc\" (UID: \"9e9a6fd9-b8d8-46e6-9c68-ff0f6e6e96dd\") " pod="openshift-marketplace/community-operators-66zjc" Jan 07 10:24:46 crc kubenswrapper[5131]: I0107 10:24:46.834153 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e9a6fd9-b8d8-46e6-9c68-ff0f6e6e96dd-utilities\") pod \"community-operators-66zjc\" (UID: \"9e9a6fd9-b8d8-46e6-9c68-ff0f6e6e96dd\") " pod="openshift-marketplace/community-operators-66zjc" Jan 07 10:24:46 crc kubenswrapper[5131]: I0107 10:24:46.834268 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e9a6fd9-b8d8-46e6-9c68-ff0f6e6e96dd-catalog-content\") pod 
\"community-operators-66zjc\" (UID: \"9e9a6fd9-b8d8-46e6-9c68-ff0f6e6e96dd\") " pod="openshift-marketplace/community-operators-66zjc" Jan 07 10:24:46 crc kubenswrapper[5131]: I0107 10:24:46.834337 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dtrsl\" (UniqueName: \"kubernetes.io/projected/9e9a6fd9-b8d8-46e6-9c68-ff0f6e6e96dd-kube-api-access-dtrsl\") pod \"community-operators-66zjc\" (UID: \"9e9a6fd9-b8d8-46e6-9c68-ff0f6e6e96dd\") " pod="openshift-marketplace/community-operators-66zjc" Jan 07 10:24:46 crc kubenswrapper[5131]: I0107 10:24:46.834788 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e9a6fd9-b8d8-46e6-9c68-ff0f6e6e96dd-utilities\") pod \"community-operators-66zjc\" (UID: \"9e9a6fd9-b8d8-46e6-9c68-ff0f6e6e96dd\") " pod="openshift-marketplace/community-operators-66zjc" Jan 07 10:24:46 crc kubenswrapper[5131]: I0107 10:24:46.834849 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e9a6fd9-b8d8-46e6-9c68-ff0f6e6e96dd-catalog-content\") pod \"community-operators-66zjc\" (UID: \"9e9a6fd9-b8d8-46e6-9c68-ff0f6e6e96dd\") " pod="openshift-marketplace/community-operators-66zjc" Jan 07 10:24:46 crc kubenswrapper[5131]: I0107 10:24:46.864084 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtrsl\" (UniqueName: \"kubernetes.io/projected/9e9a6fd9-b8d8-46e6-9c68-ff0f6e6e96dd-kube-api-access-dtrsl\") pod \"community-operators-66zjc\" (UID: \"9e9a6fd9-b8d8-46e6-9c68-ff0f6e6e96dd\") " pod="openshift-marketplace/community-operators-66zjc" Jan 07 10:24:46 crc kubenswrapper[5131]: I0107 10:24:46.988220 5131 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-66zjc" Jan 07 10:24:47 crc kubenswrapper[5131]: I0107 10:24:47.242263 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-66zjc"] Jan 07 10:24:47 crc kubenswrapper[5131]: I0107 10:24:47.442768 5131 generic.go:358] "Generic (PLEG): container finished" podID="9e9a6fd9-b8d8-46e6-9c68-ff0f6e6e96dd" containerID="1f1bd0cc740bc4845b06ee0eb325f3f7409fe34b02426bebfabad4a3b2da334b" exitCode=0 Jan 07 10:24:47 crc kubenswrapper[5131]: I0107 10:24:47.442951 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-66zjc" event={"ID":"9e9a6fd9-b8d8-46e6-9c68-ff0f6e6e96dd","Type":"ContainerDied","Data":"1f1bd0cc740bc4845b06ee0eb325f3f7409fe34b02426bebfabad4a3b2da334b"} Jan 07 10:24:47 crc kubenswrapper[5131]: I0107 10:24:47.443004 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-66zjc" event={"ID":"9e9a6fd9-b8d8-46e6-9c68-ff0f6e6e96dd","Type":"ContainerStarted","Data":"6d5b536bb22a2e3c7e26c88f885ccd74336a27206f3f524bfd21b0de4c007910"} Jan 07 10:24:48 crc kubenswrapper[5131]: I0107 10:24:48.454265 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-66zjc" event={"ID":"9e9a6fd9-b8d8-46e6-9c68-ff0f6e6e96dd","Type":"ContainerStarted","Data":"4cec6357320950ac66bbf482aa52a0d6390b7c62a894b4a92feec64d2c44bd86"} Jan 07 10:24:49 crc kubenswrapper[5131]: I0107 10:24:49.467300 5131 generic.go:358] "Generic (PLEG): container finished" podID="9e9a6fd9-b8d8-46e6-9c68-ff0f6e6e96dd" containerID="4cec6357320950ac66bbf482aa52a0d6390b7c62a894b4a92feec64d2c44bd86" exitCode=0 Jan 07 10:24:49 crc kubenswrapper[5131]: I0107 10:24:49.467418 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-66zjc" 
event={"ID":"9e9a6fd9-b8d8-46e6-9c68-ff0f6e6e96dd","Type":"ContainerDied","Data":"4cec6357320950ac66bbf482aa52a0d6390b7c62a894b4a92feec64d2c44bd86"} Jan 07 10:24:50 crc kubenswrapper[5131]: I0107 10:24:50.481490 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-66zjc" event={"ID":"9e9a6fd9-b8d8-46e6-9c68-ff0f6e6e96dd","Type":"ContainerStarted","Data":"d1519a59eb38b08e7e1d5e98a4948755a2f1813d7f6750ef6885236ef4deb234"} Jan 07 10:24:50 crc kubenswrapper[5131]: I0107 10:24:50.520719 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-66zjc" podStartSLOduration=3.883614235 podStartE2EDuration="4.52068994s" podCreationTimestamp="2026-01-07 10:24:46 +0000 UTC" firstStartedPulling="2026-01-07 10:24:47.444048354 +0000 UTC m=+2115.610349958" lastFinishedPulling="2026-01-07 10:24:48.081124099 +0000 UTC m=+2116.247425663" observedRunningTime="2026-01-07 10:24:50.510210416 +0000 UTC m=+2118.676512010" watchObservedRunningTime="2026-01-07 10:24:50.52068994 +0000 UTC m=+2118.686991544" Jan 07 10:24:50 crc kubenswrapper[5131]: I0107 10:24:50.663719 5131 patch_prober.go:28] interesting pod/machine-config-daemon-dvdrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 07 10:24:50 crc kubenswrapper[5131]: I0107 10:24:50.663830 5131 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 07 10:24:56 crc kubenswrapper[5131]: I0107 10:24:56.989011 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-66zjc" Jan 07 10:24:56 crc kubenswrapper[5131]: I0107 10:24:56.989712 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-66zjc" Jan 07 10:24:57 crc kubenswrapper[5131]: I0107 10:24:57.057657 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-66zjc" Jan 07 10:24:57 crc kubenswrapper[5131]: I0107 10:24:57.629154 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-66zjc" Jan 07 10:24:57 crc kubenswrapper[5131]: I0107 10:24:57.685342 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-66zjc"] Jan 07 10:24:59 crc kubenswrapper[5131]: I0107 10:24:59.584458 5131 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-66zjc" podUID="9e9a6fd9-b8d8-46e6-9c68-ff0f6e6e96dd" containerName="registry-server" containerID="cri-o://d1519a59eb38b08e7e1d5e98a4948755a2f1813d7f6750ef6885236ef4deb234" gracePeriod=2 Jan 07 10:25:00 crc kubenswrapper[5131]: I0107 10:25:00.069483 5131 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-66zjc" Jan 07 10:25:00 crc kubenswrapper[5131]: I0107 10:25:00.152534 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e9a6fd9-b8d8-46e6-9c68-ff0f6e6e96dd-catalog-content\") pod \"9e9a6fd9-b8d8-46e6-9c68-ff0f6e6e96dd\" (UID: \"9e9a6fd9-b8d8-46e6-9c68-ff0f6e6e96dd\") " Jan 07 10:25:00 crc kubenswrapper[5131]: I0107 10:25:00.152675 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e9a6fd9-b8d8-46e6-9c68-ff0f6e6e96dd-utilities\") pod \"9e9a6fd9-b8d8-46e6-9c68-ff0f6e6e96dd\" (UID: \"9e9a6fd9-b8d8-46e6-9c68-ff0f6e6e96dd\") " Jan 07 10:25:00 crc kubenswrapper[5131]: I0107 10:25:00.152744 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtrsl\" (UniqueName: \"kubernetes.io/projected/9e9a6fd9-b8d8-46e6-9c68-ff0f6e6e96dd-kube-api-access-dtrsl\") pod \"9e9a6fd9-b8d8-46e6-9c68-ff0f6e6e96dd\" (UID: \"9e9a6fd9-b8d8-46e6-9c68-ff0f6e6e96dd\") " Jan 07 10:25:00 crc kubenswrapper[5131]: I0107 10:25:00.155055 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9a6fd9-b8d8-46e6-9c68-ff0f6e6e96dd-utilities" (OuterVolumeSpecName: "utilities") pod "9e9a6fd9-b8d8-46e6-9c68-ff0f6e6e96dd" (UID: "9e9a6fd9-b8d8-46e6-9c68-ff0f6e6e96dd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:25:00 crc kubenswrapper[5131]: I0107 10:25:00.166939 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9a6fd9-b8d8-46e6-9c68-ff0f6e6e96dd-kube-api-access-dtrsl" (OuterVolumeSpecName: "kube-api-access-dtrsl") pod "9e9a6fd9-b8d8-46e6-9c68-ff0f6e6e96dd" (UID: "9e9a6fd9-b8d8-46e6-9c68-ff0f6e6e96dd"). InnerVolumeSpecName "kube-api-access-dtrsl". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 07 10:25:00 crc kubenswrapper[5131]: I0107 10:25:00.228585 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9a6fd9-b8d8-46e6-9c68-ff0f6e6e96dd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9e9a6fd9-b8d8-46e6-9c68-ff0f6e6e96dd" (UID: "9e9a6fd9-b8d8-46e6-9c68-ff0f6e6e96dd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 07 10:25:00 crc kubenswrapper[5131]: I0107 10:25:00.255025 5131 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e9a6fd9-b8d8-46e6-9c68-ff0f6e6e96dd-utilities\") on node \"crc\" DevicePath \"\"" Jan 07 10:25:00 crc kubenswrapper[5131]: I0107 10:25:00.255372 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dtrsl\" (UniqueName: \"kubernetes.io/projected/9e9a6fd9-b8d8-46e6-9c68-ff0f6e6e96dd-kube-api-access-dtrsl\") on node \"crc\" DevicePath \"\"" Jan 07 10:25:00 crc kubenswrapper[5131]: I0107 10:25:00.255524 5131 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e9a6fd9-b8d8-46e6-9c68-ff0f6e6e96dd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 07 10:25:00 crc kubenswrapper[5131]: I0107 10:25:00.596362 5131 generic.go:358] "Generic (PLEG): container finished" podID="9e9a6fd9-b8d8-46e6-9c68-ff0f6e6e96dd" containerID="d1519a59eb38b08e7e1d5e98a4948755a2f1813d7f6750ef6885236ef4deb234" exitCode=0 Jan 07 10:25:00 crc kubenswrapper[5131]: I0107 10:25:00.596517 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-66zjc" event={"ID":"9e9a6fd9-b8d8-46e6-9c68-ff0f6e6e96dd","Type":"ContainerDied","Data":"d1519a59eb38b08e7e1d5e98a4948755a2f1813d7f6750ef6885236ef4deb234"} Jan 07 10:25:00 crc kubenswrapper[5131]: I0107 10:25:00.596599 5131 kubelet.go:2569] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-66zjc" event={"ID":"9e9a6fd9-b8d8-46e6-9c68-ff0f6e6e96dd","Type":"ContainerDied","Data":"6d5b536bb22a2e3c7e26c88f885ccd74336a27206f3f524bfd21b0de4c007910"} Jan 07 10:25:00 crc kubenswrapper[5131]: I0107 10:25:00.596623 5131 scope.go:117] "RemoveContainer" containerID="d1519a59eb38b08e7e1d5e98a4948755a2f1813d7f6750ef6885236ef4deb234" Jan 07 10:25:00 crc kubenswrapper[5131]: I0107 10:25:00.597936 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-66zjc" Jan 07 10:25:00 crc kubenswrapper[5131]: I0107 10:25:00.621281 5131 scope.go:117] "RemoveContainer" containerID="4cec6357320950ac66bbf482aa52a0d6390b7c62a894b4a92feec64d2c44bd86" Jan 07 10:25:00 crc kubenswrapper[5131]: I0107 10:25:00.653551 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-66zjc"] Jan 07 10:25:00 crc kubenswrapper[5131]: I0107 10:25:00.660575 5131 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-66zjc"] Jan 07 10:25:00 crc kubenswrapper[5131]: I0107 10:25:00.664089 5131 scope.go:117] "RemoveContainer" containerID="1f1bd0cc740bc4845b06ee0eb325f3f7409fe34b02426bebfabad4a3b2da334b" Jan 07 10:25:00 crc kubenswrapper[5131]: I0107 10:25:00.694994 5131 scope.go:117] "RemoveContainer" containerID="d1519a59eb38b08e7e1d5e98a4948755a2f1813d7f6750ef6885236ef4deb234" Jan 07 10:25:00 crc kubenswrapper[5131]: E0107 10:25:00.695438 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1519a59eb38b08e7e1d5e98a4948755a2f1813d7f6750ef6885236ef4deb234\": container with ID starting with d1519a59eb38b08e7e1d5e98a4948755a2f1813d7f6750ef6885236ef4deb234 not found: ID does not exist" containerID="d1519a59eb38b08e7e1d5e98a4948755a2f1813d7f6750ef6885236ef4deb234" Jan 07 10:25:00 crc kubenswrapper[5131]: I0107 
10:25:00.695479 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1519a59eb38b08e7e1d5e98a4948755a2f1813d7f6750ef6885236ef4deb234"} err="failed to get container status \"d1519a59eb38b08e7e1d5e98a4948755a2f1813d7f6750ef6885236ef4deb234\": rpc error: code = NotFound desc = could not find container \"d1519a59eb38b08e7e1d5e98a4948755a2f1813d7f6750ef6885236ef4deb234\": container with ID starting with d1519a59eb38b08e7e1d5e98a4948755a2f1813d7f6750ef6885236ef4deb234 not found: ID does not exist"
Jan 07 10:25:00 crc kubenswrapper[5131]: I0107 10:25:00.695502 5131 scope.go:117] "RemoveContainer" containerID="4cec6357320950ac66bbf482aa52a0d6390b7c62a894b4a92feec64d2c44bd86"
Jan 07 10:25:00 crc kubenswrapper[5131]: E0107 10:25:00.695746 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cec6357320950ac66bbf482aa52a0d6390b7c62a894b4a92feec64d2c44bd86\": container with ID starting with 4cec6357320950ac66bbf482aa52a0d6390b7c62a894b4a92feec64d2c44bd86 not found: ID does not exist" containerID="4cec6357320950ac66bbf482aa52a0d6390b7c62a894b4a92feec64d2c44bd86"
Jan 07 10:25:00 crc kubenswrapper[5131]: I0107 10:25:00.695779 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cec6357320950ac66bbf482aa52a0d6390b7c62a894b4a92feec64d2c44bd86"} err="failed to get container status \"4cec6357320950ac66bbf482aa52a0d6390b7c62a894b4a92feec64d2c44bd86\": rpc error: code = NotFound desc = could not find container \"4cec6357320950ac66bbf482aa52a0d6390b7c62a894b4a92feec64d2c44bd86\": container with ID starting with 4cec6357320950ac66bbf482aa52a0d6390b7c62a894b4a92feec64d2c44bd86 not found: ID does not exist"
Jan 07 10:25:00 crc kubenswrapper[5131]: I0107 10:25:00.695796 5131 scope.go:117] "RemoveContainer" containerID="1f1bd0cc740bc4845b06ee0eb325f3f7409fe34b02426bebfabad4a3b2da334b"
Jan 07 10:25:00 crc kubenswrapper[5131]: E0107 10:25:00.696072 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f1bd0cc740bc4845b06ee0eb325f3f7409fe34b02426bebfabad4a3b2da334b\": container with ID starting with 1f1bd0cc740bc4845b06ee0eb325f3f7409fe34b02426bebfabad4a3b2da334b not found: ID does not exist" containerID="1f1bd0cc740bc4845b06ee0eb325f3f7409fe34b02426bebfabad4a3b2da334b"
Jan 07 10:25:00 crc kubenswrapper[5131]: I0107 10:25:00.696103 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f1bd0cc740bc4845b06ee0eb325f3f7409fe34b02426bebfabad4a3b2da334b"} err="failed to get container status \"1f1bd0cc740bc4845b06ee0eb325f3f7409fe34b02426bebfabad4a3b2da334b\": rpc error: code = NotFound desc = could not find container \"1f1bd0cc740bc4845b06ee0eb325f3f7409fe34b02426bebfabad4a3b2da334b\": container with ID starting with 1f1bd0cc740bc4845b06ee0eb325f3f7409fe34b02426bebfabad4a3b2da334b not found: ID does not exist"
Jan 07 10:25:02 crc kubenswrapper[5131]: I0107 10:25:02.196177 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9a6fd9-b8d8-46e6-9c68-ff0f6e6e96dd" path="/var/lib/kubelet/pods/9e9a6fd9-b8d8-46e6-9c68-ff0f6e6e96dd/volumes"
Jan 07 10:25:20 crc kubenswrapper[5131]: I0107 10:25:20.663533 5131 patch_prober.go:28] interesting pod/machine-config-daemon-dvdrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 07 10:25:20 crc kubenswrapper[5131]: I0107 10:25:20.664434 5131 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 07 10:25:20 crc kubenswrapper[5131]: I0107 10:25:20.664526 5131 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn"
Jan 07 10:25:20 crc kubenswrapper[5131]: I0107 10:25:20.665624 5131 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"87a2d5c5610982b3fc470ab45de6211d73250fad6e33893521dc2e60ad277d95"} pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 07 10:25:20 crc kubenswrapper[5131]: I0107 10:25:20.665759 5131 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73" containerName="machine-config-daemon" containerID="cri-o://87a2d5c5610982b3fc470ab45de6211d73250fad6e33893521dc2e60ad277d95" gracePeriod=600
Jan 07 10:25:21 crc kubenswrapper[5131]: I0107 10:25:21.806562 5131 generic.go:358] "Generic (PLEG): container finished" podID="3942e752-44ba-4678-8723-6cd778e60d73" containerID="87a2d5c5610982b3fc470ab45de6211d73250fad6e33893521dc2e60ad277d95" exitCode=0
Jan 07 10:25:21 crc kubenswrapper[5131]: I0107 10:25:21.806620 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" event={"ID":"3942e752-44ba-4678-8723-6cd778e60d73","Type":"ContainerDied","Data":"87a2d5c5610982b3fc470ab45de6211d73250fad6e33893521dc2e60ad277d95"}
Jan 07 10:25:21 crc kubenswrapper[5131]: I0107 10:25:21.807114 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" event={"ID":"3942e752-44ba-4678-8723-6cd778e60d73","Type":"ContainerStarted","Data":"164ccb2f9cf565f3d3e11eecb64a3558adec0741d00b10f631fac401b585cd80"}
Jan 07 10:25:21 crc kubenswrapper[5131]: I0107 10:25:21.807152 5131 scope.go:117] "RemoveContainer" containerID="9663cd7495facf8f3b5c9cd42ca06c0e50d8cba730f2743bbdac9e0b5db67e25"
Jan 07 10:26:00 crc kubenswrapper[5131]: I0107 10:26:00.147713 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29463026-z7974"]
Jan 07 10:26:00 crc kubenswrapper[5131]: I0107 10:26:00.149727 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9e9a6fd9-b8d8-46e6-9c68-ff0f6e6e96dd" containerName="registry-server"
Jan 07 10:26:00 crc kubenswrapper[5131]: I0107 10:26:00.149770 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e9a6fd9-b8d8-46e6-9c68-ff0f6e6e96dd" containerName="registry-server"
Jan 07 10:26:00 crc kubenswrapper[5131]: I0107 10:26:00.149810 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9e9a6fd9-b8d8-46e6-9c68-ff0f6e6e96dd" containerName="extract-utilities"
Jan 07 10:26:00 crc kubenswrapper[5131]: I0107 10:26:00.149827 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e9a6fd9-b8d8-46e6-9c68-ff0f6e6e96dd" containerName="extract-utilities"
Jan 07 10:26:00 crc kubenswrapper[5131]: I0107 10:26:00.149873 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9e9a6fd9-b8d8-46e6-9c68-ff0f6e6e96dd" containerName="extract-content"
Jan 07 10:26:00 crc kubenswrapper[5131]: I0107 10:26:00.149890 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e9a6fd9-b8d8-46e6-9c68-ff0f6e6e96dd" containerName="extract-content"
Jan 07 10:26:00 crc kubenswrapper[5131]: I0107 10:26:00.150190 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="9e9a6fd9-b8d8-46e6-9c68-ff0f6e6e96dd" containerName="registry-server"
Jan 07 10:26:00 crc kubenswrapper[5131]: I0107 10:26:00.159193 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29463026-z7974"
Jan 07 10:26:00 crc kubenswrapper[5131]: I0107 10:26:00.161496 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29463026-z7974"]
Jan 07 10:26:00 crc kubenswrapper[5131]: I0107 10:26:00.165199 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-l8fwl\""
Jan 07 10:26:00 crc kubenswrapper[5131]: I0107 10:26:00.165384 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 07 10:26:00 crc kubenswrapper[5131]: I0107 10:26:00.165501 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 07 10:26:00 crc kubenswrapper[5131]: I0107 10:26:00.280893 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9pbk\" (UniqueName: \"kubernetes.io/projected/f2012e37-6dc9-47a0-9dbc-739cba0d777f-kube-api-access-b9pbk\") pod \"auto-csr-approver-29463026-z7974\" (UID: \"f2012e37-6dc9-47a0-9dbc-739cba0d777f\") " pod="openshift-infra/auto-csr-approver-29463026-z7974"
Jan 07 10:26:00 crc kubenswrapper[5131]: I0107 10:26:00.382472 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b9pbk\" (UniqueName: \"kubernetes.io/projected/f2012e37-6dc9-47a0-9dbc-739cba0d777f-kube-api-access-b9pbk\") pod \"auto-csr-approver-29463026-z7974\" (UID: \"f2012e37-6dc9-47a0-9dbc-739cba0d777f\") " pod="openshift-infra/auto-csr-approver-29463026-z7974"
Jan 07 10:26:00 crc kubenswrapper[5131]: I0107 10:26:00.443633 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9pbk\" (UniqueName: \"kubernetes.io/projected/f2012e37-6dc9-47a0-9dbc-739cba0d777f-kube-api-access-b9pbk\") pod \"auto-csr-approver-29463026-z7974\" (UID: \"f2012e37-6dc9-47a0-9dbc-739cba0d777f\") " pod="openshift-infra/auto-csr-approver-29463026-z7974"
Jan 07 10:26:00 crc kubenswrapper[5131]: I0107 10:26:00.481348 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29463026-z7974"
Jan 07 10:26:00 crc kubenswrapper[5131]: I0107 10:26:00.973678 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29463026-z7974"]
Jan 07 10:26:00 crc kubenswrapper[5131]: W0107 10:26:00.980239 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2012e37_6dc9_47a0_9dbc_739cba0d777f.slice/crio-a9fde159775f3ebcf6f29d5aa7d70ef6bef1cd2c4cb3217e8f900f81c6aeebbd WatchSource:0}: Error finding container a9fde159775f3ebcf6f29d5aa7d70ef6bef1cd2c4cb3217e8f900f81c6aeebbd: Status 404 returned error can't find the container with id a9fde159775f3ebcf6f29d5aa7d70ef6bef1cd2c4cb3217e8f900f81c6aeebbd
Jan 07 10:26:01 crc kubenswrapper[5131]: I0107 10:26:01.218118 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29463026-z7974" event={"ID":"f2012e37-6dc9-47a0-9dbc-739cba0d777f","Type":"ContainerStarted","Data":"a9fde159775f3ebcf6f29d5aa7d70ef6bef1cd2c4cb3217e8f900f81c6aeebbd"}
Jan 07 10:26:02 crc kubenswrapper[5131]: I0107 10:26:02.227751 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29463026-z7974" event={"ID":"f2012e37-6dc9-47a0-9dbc-739cba0d777f","Type":"ContainerStarted","Data":"c9fec18b97fc56c21ff477c6dd910c775e7d351e153e163017943cb5765b1169"}
Jan 07 10:26:02 crc kubenswrapper[5131]: I0107 10:26:02.253651 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29463026-z7974" podStartSLOduration=1.415576242 podStartE2EDuration="2.253626681s" podCreationTimestamp="2026-01-07 10:26:00 +0000 UTC" firstStartedPulling="2026-01-07 10:26:00.98153298 +0000 UTC m=+2189.147834574" lastFinishedPulling="2026-01-07 10:26:01.819583419 +0000 UTC m=+2189.985885013" observedRunningTime="2026-01-07 10:26:02.242129733 +0000 UTC m=+2190.408431297" watchObservedRunningTime="2026-01-07 10:26:02.253626681 +0000 UTC m=+2190.419928245"
Jan 07 10:26:03 crc kubenswrapper[5131]: I0107 10:26:03.235724 5131 generic.go:358] "Generic (PLEG): container finished" podID="f2012e37-6dc9-47a0-9dbc-739cba0d777f" containerID="c9fec18b97fc56c21ff477c6dd910c775e7d351e153e163017943cb5765b1169" exitCode=0
Jan 07 10:26:03 crc kubenswrapper[5131]: I0107 10:26:03.235801 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29463026-z7974" event={"ID":"f2012e37-6dc9-47a0-9dbc-739cba0d777f","Type":"ContainerDied","Data":"c9fec18b97fc56c21ff477c6dd910c775e7d351e153e163017943cb5765b1169"}
Jan 07 10:26:04 crc kubenswrapper[5131]: I0107 10:26:04.519563 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29463026-z7974"
Jan 07 10:26:04 crc kubenswrapper[5131]: I0107 10:26:04.649244 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9pbk\" (UniqueName: \"kubernetes.io/projected/f2012e37-6dc9-47a0-9dbc-739cba0d777f-kube-api-access-b9pbk\") pod \"f2012e37-6dc9-47a0-9dbc-739cba0d777f\" (UID: \"f2012e37-6dc9-47a0-9dbc-739cba0d777f\") "
Jan 07 10:26:04 crc kubenswrapper[5131]: I0107 10:26:04.656361 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2012e37-6dc9-47a0-9dbc-739cba0d777f-kube-api-access-b9pbk" (OuterVolumeSpecName: "kube-api-access-b9pbk") pod "f2012e37-6dc9-47a0-9dbc-739cba0d777f" (UID: "f2012e37-6dc9-47a0-9dbc-739cba0d777f"). InnerVolumeSpecName "kube-api-access-b9pbk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 07 10:26:04 crc kubenswrapper[5131]: I0107 10:26:04.751017 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-b9pbk\" (UniqueName: \"kubernetes.io/projected/f2012e37-6dc9-47a0-9dbc-739cba0d777f-kube-api-access-b9pbk\") on node \"crc\" DevicePath \"\""
Jan 07 10:26:05 crc kubenswrapper[5131]: I0107 10:26:05.263386 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29463026-z7974" event={"ID":"f2012e37-6dc9-47a0-9dbc-739cba0d777f","Type":"ContainerDied","Data":"a9fde159775f3ebcf6f29d5aa7d70ef6bef1cd2c4cb3217e8f900f81c6aeebbd"}
Jan 07 10:26:05 crc kubenswrapper[5131]: I0107 10:26:05.263429 5131 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9fde159775f3ebcf6f29d5aa7d70ef6bef1cd2c4cb3217e8f900f81c6aeebbd"
Jan 07 10:26:05 crc kubenswrapper[5131]: I0107 10:26:05.263556 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29463026-z7974"
Jan 07 10:26:05 crc kubenswrapper[5131]: I0107 10:26:05.270092 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29463020-bbg74"]
Jan 07 10:26:05 crc kubenswrapper[5131]: I0107 10:26:05.274705 5131 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29463020-bbg74"]
Jan 07 10:26:06 crc kubenswrapper[5131]: I0107 10:26:06.197318 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ecbd976-19bc-466a-8c73-9fcde5c4f266" path="/var/lib/kubelet/pods/9ecbd976-19bc-466a-8c73-9fcde5c4f266/volumes"
Jan 07 10:26:37 crc kubenswrapper[5131]: I0107 10:26:37.192420 5131 scope.go:117] "RemoveContainer" containerID="5e98ff950110574b915e29387b7a8135c607e2cff01ce121be49d2b6e6e1e536"
Jan 07 10:27:20 crc kubenswrapper[5131]: I0107 10:27:20.663406 5131 patch_prober.go:28] interesting pod/machine-config-daemon-dvdrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 07 10:27:20 crc kubenswrapper[5131]: I0107 10:27:20.664422 5131 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 07 10:27:29 crc kubenswrapper[5131]: I0107 10:27:29.113799 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-bxxv6"]
Jan 07 10:27:29 crc kubenswrapper[5131]: I0107 10:27:29.115743 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f2012e37-6dc9-47a0-9dbc-739cba0d777f" containerName="oc"
Jan 07 10:27:29 crc kubenswrapper[5131]: I0107 10:27:29.115771 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2012e37-6dc9-47a0-9dbc-739cba0d777f" containerName="oc"
Jan 07 10:27:29 crc kubenswrapper[5131]: I0107 10:27:29.116373 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="f2012e37-6dc9-47a0-9dbc-739cba0d777f" containerName="oc"
Jan 07 10:27:29 crc kubenswrapper[5131]: I0107 10:27:29.139471 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-bxxv6"
Jan 07 10:27:29 crc kubenswrapper[5131]: I0107 10:27:29.152745 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-bxxv6"]
Jan 07 10:27:29 crc kubenswrapper[5131]: I0107 10:27:29.191969 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hs8zp\" (UniqueName: \"kubernetes.io/projected/49955907-9d95-48cd-bb81-e632040a6b7b-kube-api-access-hs8zp\") pod \"infrawatch-operators-bxxv6\" (UID: \"49955907-9d95-48cd-bb81-e632040a6b7b\") " pod="service-telemetry/infrawatch-operators-bxxv6"
Jan 07 10:27:29 crc kubenswrapper[5131]: I0107 10:27:29.294087 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hs8zp\" (UniqueName: \"kubernetes.io/projected/49955907-9d95-48cd-bb81-e632040a6b7b-kube-api-access-hs8zp\") pod \"infrawatch-operators-bxxv6\" (UID: \"49955907-9d95-48cd-bb81-e632040a6b7b\") " pod="service-telemetry/infrawatch-operators-bxxv6"
Jan 07 10:27:29 crc kubenswrapper[5131]: I0107 10:27:29.323009 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hs8zp\" (UniqueName: \"kubernetes.io/projected/49955907-9d95-48cd-bb81-e632040a6b7b-kube-api-access-hs8zp\") pod \"infrawatch-operators-bxxv6\" (UID: \"49955907-9d95-48cd-bb81-e632040a6b7b\") " pod="service-telemetry/infrawatch-operators-bxxv6"
Jan 07 10:27:29 crc kubenswrapper[5131]: I0107 10:27:29.466782 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-bxxv6"
Jan 07 10:27:29 crc kubenswrapper[5131]: I0107 10:27:29.956297 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-bxxv6"]
Jan 07 10:27:29 crc kubenswrapper[5131]: W0107 10:27:29.964904 5131 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49955907_9d95_48cd_bb81_e632040a6b7b.slice/crio-bfb3101e70a54176b38b88fda8d2dd6ab67c31fb731f66d6037c5bf11b34afd7 WatchSource:0}: Error finding container bfb3101e70a54176b38b88fda8d2dd6ab67c31fb731f66d6037c5bf11b34afd7: Status 404 returned error can't find the container with id bfb3101e70a54176b38b88fda8d2dd6ab67c31fb731f66d6037c5bf11b34afd7
Jan 07 10:27:29 crc kubenswrapper[5131]: I0107 10:27:29.967705 5131 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 07 10:27:30 crc kubenswrapper[5131]: I0107 10:27:30.159910 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-bxxv6" event={"ID":"49955907-9d95-48cd-bb81-e632040a6b7b","Type":"ContainerStarted","Data":"bfb3101e70a54176b38b88fda8d2dd6ab67c31fb731f66d6037c5bf11b34afd7"}
Jan 07 10:27:31 crc kubenswrapper[5131]: I0107 10:27:31.171017 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-bxxv6" event={"ID":"49955907-9d95-48cd-bb81-e632040a6b7b","Type":"ContainerStarted","Data":"94b33fb98ddc99eff478c2b1cc77e630be3df65802ae9e57b6cbbb9fb8718ab3"}
Jan 07 10:27:31 crc kubenswrapper[5131]: I0107 10:27:31.193415 5131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-bxxv6" podStartSLOduration=1.822881245 podStartE2EDuration="2.193401787s" podCreationTimestamp="2026-01-07 10:27:29 +0000 UTC" firstStartedPulling="2026-01-07 10:27:29.968030264 +0000 UTC m=+2278.134331868" lastFinishedPulling="2026-01-07 10:27:30.338550806 +0000 UTC m=+2278.504852410" observedRunningTime="2026-01-07 10:27:31.191406797 +0000 UTC m=+2279.357708361" watchObservedRunningTime="2026-01-07 10:27:31.193401787 +0000 UTC m=+2279.359703351"
Jan 07 10:27:39 crc kubenswrapper[5131]: I0107 10:27:39.467248 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-bxxv6"
Jan 07 10:27:39 crc kubenswrapper[5131]: I0107 10:27:39.468951 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/infrawatch-operators-bxxv6"
Jan 07 10:27:39 crc kubenswrapper[5131]: I0107 10:27:39.518729 5131 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-bxxv6"
Jan 07 10:27:40 crc kubenswrapper[5131]: I0107 10:27:40.311792 5131 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-bxxv6"
Jan 07 10:27:41 crc kubenswrapper[5131]: I0107 10:27:41.874634 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-bxxv6"]
Jan 07 10:27:42 crc kubenswrapper[5131]: I0107 10:27:42.289389 5131 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-bxxv6" podUID="49955907-9d95-48cd-bb81-e632040a6b7b" containerName="registry-server" containerID="cri-o://94b33fb98ddc99eff478c2b1cc77e630be3df65802ae9e57b6cbbb9fb8718ab3" gracePeriod=2
Jan 07 10:27:42 crc kubenswrapper[5131]: I0107 10:27:42.738270 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-bxxv6"
Jan 07 10:27:42 crc kubenswrapper[5131]: I0107 10:27:42.840344 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hs8zp\" (UniqueName: \"kubernetes.io/projected/49955907-9d95-48cd-bb81-e632040a6b7b-kube-api-access-hs8zp\") pod \"49955907-9d95-48cd-bb81-e632040a6b7b\" (UID: \"49955907-9d95-48cd-bb81-e632040a6b7b\") "
Jan 07 10:27:42 crc kubenswrapper[5131]: I0107 10:27:42.850680 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49955907-9d95-48cd-bb81-e632040a6b7b-kube-api-access-hs8zp" (OuterVolumeSpecName: "kube-api-access-hs8zp") pod "49955907-9d95-48cd-bb81-e632040a6b7b" (UID: "49955907-9d95-48cd-bb81-e632040a6b7b"). InnerVolumeSpecName "kube-api-access-hs8zp". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 07 10:27:42 crc kubenswrapper[5131]: I0107 10:27:42.943213 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hs8zp\" (UniqueName: \"kubernetes.io/projected/49955907-9d95-48cd-bb81-e632040a6b7b-kube-api-access-hs8zp\") on node \"crc\" DevicePath \"\""
Jan 07 10:27:43 crc kubenswrapper[5131]: I0107 10:27:43.297339 5131 generic.go:358] "Generic (PLEG): container finished" podID="49955907-9d95-48cd-bb81-e632040a6b7b" containerID="94b33fb98ddc99eff478c2b1cc77e630be3df65802ae9e57b6cbbb9fb8718ab3" exitCode=0
Jan 07 10:27:43 crc kubenswrapper[5131]: I0107 10:27:43.297402 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-bxxv6" event={"ID":"49955907-9d95-48cd-bb81-e632040a6b7b","Type":"ContainerDied","Data":"94b33fb98ddc99eff478c2b1cc77e630be3df65802ae9e57b6cbbb9fb8718ab3"}
Jan 07 10:27:43 crc kubenswrapper[5131]: I0107 10:27:43.297446 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-bxxv6"
Jan 07 10:27:43 crc kubenswrapper[5131]: I0107 10:27:43.297464 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-bxxv6" event={"ID":"49955907-9d95-48cd-bb81-e632040a6b7b","Type":"ContainerDied","Data":"bfb3101e70a54176b38b88fda8d2dd6ab67c31fb731f66d6037c5bf11b34afd7"}
Jan 07 10:27:43 crc kubenswrapper[5131]: I0107 10:27:43.297494 5131 scope.go:117] "RemoveContainer" containerID="94b33fb98ddc99eff478c2b1cc77e630be3df65802ae9e57b6cbbb9fb8718ab3"
Jan 07 10:27:43 crc kubenswrapper[5131]: I0107 10:27:43.320540 5131 scope.go:117] "RemoveContainer" containerID="94b33fb98ddc99eff478c2b1cc77e630be3df65802ae9e57b6cbbb9fb8718ab3"
Jan 07 10:27:43 crc kubenswrapper[5131]: E0107 10:27:43.320948 5131 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94b33fb98ddc99eff478c2b1cc77e630be3df65802ae9e57b6cbbb9fb8718ab3\": container with ID starting with 94b33fb98ddc99eff478c2b1cc77e630be3df65802ae9e57b6cbbb9fb8718ab3 not found: ID does not exist" containerID="94b33fb98ddc99eff478c2b1cc77e630be3df65802ae9e57b6cbbb9fb8718ab3"
Jan 07 10:27:43 crc kubenswrapper[5131]: I0107 10:27:43.320992 5131 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94b33fb98ddc99eff478c2b1cc77e630be3df65802ae9e57b6cbbb9fb8718ab3"} err="failed to get container status \"94b33fb98ddc99eff478c2b1cc77e630be3df65802ae9e57b6cbbb9fb8718ab3\": rpc error: code = NotFound desc = could not find container \"94b33fb98ddc99eff478c2b1cc77e630be3df65802ae9e57b6cbbb9fb8718ab3\": container with ID starting with 94b33fb98ddc99eff478c2b1cc77e630be3df65802ae9e57b6cbbb9fb8718ab3 not found: ID does not exist"
Jan 07 10:27:43 crc kubenswrapper[5131]: I0107 10:27:43.340030 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-bxxv6"]
Jan 07 10:27:43 crc kubenswrapper[5131]: I0107 10:27:43.347479 5131 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-bxxv6"]
Jan 07 10:27:44 crc kubenswrapper[5131]: I0107 10:27:44.196074 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49955907-9d95-48cd-bb81-e632040a6b7b" path="/var/lib/kubelet/pods/49955907-9d95-48cd-bb81-e632040a6b7b/volumes"
Jan 07 10:27:50 crc kubenswrapper[5131]: I0107 10:27:50.663231 5131 patch_prober.go:28] interesting pod/machine-config-daemon-dvdrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 07 10:27:50 crc kubenswrapper[5131]: I0107 10:27:50.663956 5131 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 07 10:28:00 crc kubenswrapper[5131]: I0107 10:28:00.147820 5131 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29463028-kqxfb"]
Jan 07 10:28:00 crc kubenswrapper[5131]: I0107 10:28:00.149308 5131 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="49955907-9d95-48cd-bb81-e632040a6b7b" containerName="registry-server"
Jan 07 10:28:00 crc kubenswrapper[5131]: I0107 10:28:00.149323 5131 state_mem.go:107] "Deleted CPUSet assignment" podUID="49955907-9d95-48cd-bb81-e632040a6b7b" containerName="registry-server"
Jan 07 10:28:00 crc kubenswrapper[5131]: I0107 10:28:00.149547 5131 memory_manager.go:356] "RemoveStaleState removing state" podUID="49955907-9d95-48cd-bb81-e632040a6b7b" containerName="registry-server"
Jan 07 10:28:00 crc kubenswrapper[5131]: I0107 10:28:00.162705 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29463028-kqxfb"]
Jan 07 10:28:00 crc kubenswrapper[5131]: I0107 10:28:00.162845 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29463028-kqxfb"
Jan 07 10:28:00 crc kubenswrapper[5131]: I0107 10:28:00.193545 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 07 10:28:00 crc kubenswrapper[5131]: I0107 10:28:00.193822 5131 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 07 10:28:00 crc kubenswrapper[5131]: I0107 10:28:00.193886 5131 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-l8fwl\""
Jan 07 10:28:00 crc kubenswrapper[5131]: I0107 10:28:00.238328 5131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4lg8\" (UniqueName: \"kubernetes.io/projected/fa7bf820-a139-465d-9671-927ca76cd4f7-kube-api-access-g4lg8\") pod \"auto-csr-approver-29463028-kqxfb\" (UID: \"fa7bf820-a139-465d-9671-927ca76cd4f7\") " pod="openshift-infra/auto-csr-approver-29463028-kqxfb"
Jan 07 10:28:00 crc kubenswrapper[5131]: I0107 10:28:00.339632 5131 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g4lg8\" (UniqueName: \"kubernetes.io/projected/fa7bf820-a139-465d-9671-927ca76cd4f7-kube-api-access-g4lg8\") pod \"auto-csr-approver-29463028-kqxfb\" (UID: \"fa7bf820-a139-465d-9671-927ca76cd4f7\") " pod="openshift-infra/auto-csr-approver-29463028-kqxfb"
Jan 07 10:28:00 crc kubenswrapper[5131]: I0107 10:28:00.367115 5131 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4lg8\" (UniqueName: \"kubernetes.io/projected/fa7bf820-a139-465d-9671-927ca76cd4f7-kube-api-access-g4lg8\") pod \"auto-csr-approver-29463028-kqxfb\" (UID: \"fa7bf820-a139-465d-9671-927ca76cd4f7\") " pod="openshift-infra/auto-csr-approver-29463028-kqxfb"
Jan 07 10:28:00 crc kubenswrapper[5131]: I0107 10:28:00.511755 5131 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29463028-kqxfb"
Jan 07 10:28:00 crc kubenswrapper[5131]: I0107 10:28:00.976359 5131 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29463028-kqxfb"]
Jan 07 10:28:01 crc kubenswrapper[5131]: I0107 10:28:01.491769 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29463028-kqxfb" event={"ID":"fa7bf820-a139-465d-9671-927ca76cd4f7","Type":"ContainerStarted","Data":"f5f866231d51a787804800e6a1834e252f242b2d0e6fcc5a23cfa6998253695b"}
Jan 07 10:28:03 crc kubenswrapper[5131]: I0107 10:28:03.508193 5131 generic.go:358] "Generic (PLEG): container finished" podID="fa7bf820-a139-465d-9671-927ca76cd4f7" containerID="3d1e21780e05789a71850c28d6b819b2f11f1f792045b9f0a9f8924fd6c77981" exitCode=0
Jan 07 10:28:03 crc kubenswrapper[5131]: I0107 10:28:03.508376 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29463028-kqxfb" event={"ID":"fa7bf820-a139-465d-9671-927ca76cd4f7","Type":"ContainerDied","Data":"3d1e21780e05789a71850c28d6b819b2f11f1f792045b9f0a9f8924fd6c77981"}
Jan 07 10:28:04 crc kubenswrapper[5131]: I0107 10:28:04.877972 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29463028-kqxfb"
Jan 07 10:28:04 crc kubenswrapper[5131]: I0107 10:28:04.909568 5131 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4lg8\" (UniqueName: \"kubernetes.io/projected/fa7bf820-a139-465d-9671-927ca76cd4f7-kube-api-access-g4lg8\") pod \"fa7bf820-a139-465d-9671-927ca76cd4f7\" (UID: \"fa7bf820-a139-465d-9671-927ca76cd4f7\") "
Jan 07 10:28:04 crc kubenswrapper[5131]: I0107 10:28:04.921129 5131 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa7bf820-a139-465d-9671-927ca76cd4f7-kube-api-access-g4lg8" (OuterVolumeSpecName: "kube-api-access-g4lg8") pod "fa7bf820-a139-465d-9671-927ca76cd4f7" (UID: "fa7bf820-a139-465d-9671-927ca76cd4f7"). InnerVolumeSpecName "kube-api-access-g4lg8". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 07 10:28:05 crc kubenswrapper[5131]: I0107 10:28:05.011335 5131 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-g4lg8\" (UniqueName: \"kubernetes.io/projected/fa7bf820-a139-465d-9671-927ca76cd4f7-kube-api-access-g4lg8\") on node \"crc\" DevicePath \"\""
Jan 07 10:28:05 crc kubenswrapper[5131]: I0107 10:28:05.531561 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29463028-kqxfb" event={"ID":"fa7bf820-a139-465d-9671-927ca76cd4f7","Type":"ContainerDied","Data":"f5f866231d51a787804800e6a1834e252f242b2d0e6fcc5a23cfa6998253695b"}
Jan 07 10:28:05 crc kubenswrapper[5131]: I0107 10:28:05.531999 5131 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5f866231d51a787804800e6a1834e252f242b2d0e6fcc5a23cfa6998253695b"
Jan 07 10:28:05 crc kubenswrapper[5131]: I0107 10:28:05.531589 5131 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29463028-kqxfb"
Jan 07 10:28:05 crc kubenswrapper[5131]: I0107 10:28:05.954811 5131 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29463022-jpl7l"]
Jan 07 10:28:05 crc kubenswrapper[5131]: I0107 10:28:05.959982 5131 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29463022-jpl7l"]
Jan 07 10:28:06 crc kubenswrapper[5131]: I0107 10:28:06.197328 5131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="462d6a06-dce9-4c32-ae21-105edfc12ff3" path="/var/lib/kubelet/pods/462d6a06-dce9-4c32-ae21-105edfc12ff3/volumes"
Jan 07 10:28:20 crc kubenswrapper[5131]: I0107 10:28:20.663139 5131 patch_prober.go:28] interesting pod/machine-config-daemon-dvdrn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 07 10:28:20 crc kubenswrapper[5131]: I0107 10:28:20.663809 5131 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 07 10:28:20 crc kubenswrapper[5131]: I0107 10:28:20.664072 5131 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn"
Jan 07 10:28:20 crc kubenswrapper[5131]: I0107 10:28:20.665110 5131 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"164ccb2f9cf565f3d3e11eecb64a3558adec0741d00b10f631fac401b585cd80"} pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 07 10:28:20 crc kubenswrapper[5131]: I0107 10:28:20.665212 5131 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73" containerName="machine-config-daemon" containerID="cri-o://164ccb2f9cf565f3d3e11eecb64a3558adec0741d00b10f631fac401b585cd80" gracePeriod=600
Jan 07 10:28:20 crc kubenswrapper[5131]: E0107 10:28:20.797623 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvdrn_openshift-machine-config-operator(3942e752-44ba-4678-8723-6cd778e60d73)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73"
Jan 07 10:28:21 crc kubenswrapper[5131]: I0107 10:28:21.708189 5131 generic.go:358] "Generic (PLEG): container finished" podID="3942e752-44ba-4678-8723-6cd778e60d73" containerID="164ccb2f9cf565f3d3e11eecb64a3558adec0741d00b10f631fac401b585cd80" exitCode=0
Jan 07 10:28:21 crc kubenswrapper[5131]: I0107 10:28:21.708353 5131 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" event={"ID":"3942e752-44ba-4678-8723-6cd778e60d73","Type":"ContainerDied","Data":"164ccb2f9cf565f3d3e11eecb64a3558adec0741d00b10f631fac401b585cd80"}
Jan 07 10:28:21 crc kubenswrapper[5131]: I0107 10:28:21.708393 5131 scope.go:117] "RemoveContainer" containerID="87a2d5c5610982b3fc470ab45de6211d73250fad6e33893521dc2e60ad277d95"
Jan 07 10:28:21 crc kubenswrapper[5131]: I0107 10:28:21.708975 5131 scope.go:117] "RemoveContainer" containerID="164ccb2f9cf565f3d3e11eecb64a3558adec0741d00b10f631fac401b585cd80"
Jan 07 10:28:21 crc kubenswrapper[5131]: E0107 10:28:21.709287 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvdrn_openshift-machine-config-operator(3942e752-44ba-4678-8723-6cd778e60d73)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73"
Jan 07 10:28:36 crc kubenswrapper[5131]: I0107 10:28:36.180751 5131 scope.go:117] "RemoveContainer" containerID="164ccb2f9cf565f3d3e11eecb64a3558adec0741d00b10f631fac401b585cd80"
Jan 07 10:28:36 crc kubenswrapper[5131]: E0107 10:28:36.182367 5131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvdrn_openshift-machine-config-operator(3942e752-44ba-4678-8723-6cd778e60d73)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvdrn" podUID="3942e752-44ba-4678-8723-6cd778e60d73"
Jan 07 10:28:37 crc kubenswrapper[5131]: I0107 10:28:37.348588 5131 scope.go:117] "RemoveContainer" containerID="7a41426b622d1a614889309a73d2b71a56f4f67630c609977aeb5e4ca882e347"